Machine learning research problems and how organizations can derive insights from scientific papers

Decoding the complex world of AI research: a roadmap for businesses to navigate and capitalize on Machine Learning innovations.

Machine learning research has changed. Few researchers are still interested in applying machine learning methods and techniques to real-world problems.

There’s plenty of information about AI and ML on the web.

However, if you look closely, you’ll find that most ML-related blogs fall into one of two categories:

  • High-level marketing fluff – full of unproven claims, with lots of buzzwords like “robots” and “automation” thrown into it – that is functionally useless;
  • More detailed technical materials that read like TensorFlow courses rather than explanatory articles, and from which average business people can hardly derive valuable insights.

Another problem is that top-tier machine learning researchers, whose publications often describe novel architectures for machine learning problems and breakthrough performances in isolated tasks, tend to focus on something other than real-world applications.

They generally consider aiming AI at high-impact, real business problems to be ‘of limited significance.’

In this post, we’ll talk briefly about the current problems in AI research, why machine learning techniques, applications, and algorithms are treated differently in business and academia, and how an average organization – a product or service provider – can quickly identify AI opportunities and act on them.

Machine learning research is an endless race to the top

Information discovery/dissemination processes and the overall research culture in AI/ML have changed dramatically in recent years.

The field has grown, switched almost entirely to the conference publication model (journal articles are rarely published now), and become more competitive.

The rush to publish

Researchers constantly rush to put their ideas out. Often, they sacrifice rigor and systematic evaluation to get ahead before someone else presents a similar concept/method/architecture and scoops them. Or they skip the conventional review and iterative improvement processes to make a conference deadline.

As a result, we get surface-level productivity in data science. A plethora of papers are released quickly, which seemingly drives the field further along. Still, many of them lack depth, are incremental at best, and are rife with errors. Most of these works would only be fit for submission after substantial additional work.

The importance of slow science

Fundamental advances are a product of meticulous testing – slow science that requires researchers to step back, carefully assess, and verify ideas and statements before putting them out. Sadly, many in the ML community have strayed from this principle in recent years, chasing attention, bibliometrics, and other short-term successes.

Alarming trends in AI research

Here are some other alarming trends we’ve observed in recent AI-related scientific works (this is particularly relevant to deep learning and reinforcement learning):

  • Authors use technical jargon and mathematics excessively and unnecessarily, which obfuscates rather than clarifies their message. Presumably, this is done to impress. But the conflation of technical and non-technical concepts, which we frequently see in recent publications, leaves readers confused instead.
  • They casually and carelessly throw around terms of art and misuse language (using novel terms with colloquial connotations and overloading established technical terms are the most common examples of this).
  • They speculate instead of explaining and often fail to distinguish between the two.
  • They emphasize the least important – but most sensational – aspects of their work without correctly identifying the sources of empirical gains (e.g., focusing on trivial architectural modifications in a neural net when hyperparameter tuning made all the gains possible).
  • They don’t focus on real-world applications and high-impact results and only care about introducing novel concepts that might interest reviewers. The word “application” in a research paper, many believe, can lead to it being marginalized at conferences and not receiving much attention.

Keeping all this in mind, recent AI-related scientific works tend to be narrow-minded and lack a solid logical foundation: the authors focus on following existing trends rather than seeking truth.

What a good AI research paper or article should include

This troubling proliferation of shoddy papers (currently one of the most pressing problems in artificial intelligence) came about in the wake of the recent success of deep learning models – a success that, paradoxically, was built on the opposite habits: thorough empirical investigations, careful curation of the labeled data used in research, and detailed, well-tested characterizations of the principles for training neural networks.

In the current culture of computer science, which seems to be moving forward and decaying at the same time, it’s important to remember what qualities a good paper or article should possess:

  • Include proper terminology – precise, empowering for the reader, not misleading, without unproven connotations, not conflated with related concepts that are distinct;
  • Highlight how theoretical analysis of the issue at hand relates to empirical or intuitive claims;
  • Only present conclusions when there’s enough factual evidence to back them up (this is particularly important for business applications of ML);
  • Provide intuition to help the reader comprehend the matter;
  • Conduct a careful empirical investigation in which multiple approaches are thoroughly evaluated and ruled out before the best one is selected.

As business people, we ought not to chase the latest architectures and shiny new ultra-deep, unexplainable models; we must look for something completely different – simplicity, reliability, and results that can be replicated easily.

Tried-and-tested machine learning models and implementation practices

Companies trying to harness AI don’t just need powerful machine learning algorithms; we’ve seen time and again that decision trees and random forests, primitive as they might seem, can produce just as high an AUC score (and help the organization achieve just as high an ROI) as a deep, novel neural network – without the pain of a lengthy setup and without you having to label colossal amounts of training data yourself.
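
As an illustrative sketch of such a baseline (assuming scikit-learn is available; the synthetic dataset and every parameter below are made up for the example), a plain random forest can set a strong AUC benchmark in a handful of lines:

```python
# A minimal baseline sketch: a plain random forest on a synthetic
# binary-classification task, scored with ROC AUC. Assumes scikit-learn
# is installed; the dataset and parameters are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score the held-out set; any deep model would have to beat this number
# to justify its extra complexity.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Random forest baseline AUC: {auc:.3f}")
```

If a more elaborate model can’t clearly beat this kind of baseline on a held-out set, the extra complexity rarely pays for itself.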

What it comes down to, really, is scaling: we must work out a plan to quickly turn an excellent Jupyter notebook into a real-world deployment.
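
A hedged sketch of the first step in that plan, using only the Python standard library (`SimpleThresholdModel` and the artifact name are hypothetical stand-ins for whatever lives in your notebook): export the fitted model as a versioned artifact, then load it in serving code that knows nothing about the notebook that produced it.

```python
# Sketch: turning a notebook-trained model into a deployable artifact.
# `SimpleThresholdModel` is a hypothetical stand-in for a real model.
import os
import pickle
import tempfile

class SimpleThresholdModel:
    """Toy model: predicts 1 when a score exceeds a fixed threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, scores):
        return [1 if s > self.threshold else 0 for s in scores]

# 1. In the notebook: fit/choose the model, then export it as an artifact.
model = SimpleThresholdModel(threshold=0.7)
artifact_path = os.path.join(tempfile.gettempdir(), "model-v1.pkl")
with open(artifact_path, "wb") as f:
    pickle.dump(model, f)

# 2. In the service: load the artifact and serve predictions. This code
# depends only on the artifact, not on the notebook.
with open(artifact_path, "rb") as f:
    served_model = pickle.load(f)

print(served_model.predict([0.2, 0.9]))  # -> [0, 1]
```

In practice you would wrap the loaded artifact in an API service, but the separation shown here – training code produces an artifact, serving code consumes it – is the part that makes scaling possible.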

How to make the most out of an AI initiative and machine learning algorithms

In the rapidly evolving landscape of AI and ML, the collaboration between business leaders and data scientists is more critical than ever. This piece highlights the importance of shared vision and understanding between these two groups, their challenges, and the best practices for effective collaboration.

The importance of shared vision and understanding

Shared vision and understanding are crucial. It’s common for business people and PMs to make all decisions regarding the overall direction of an AI project and for data scientists to oversee architecture selection, experimentation, and model deployment. This is logical.

The former group has insight into creating business value, optimizing operational decisions in corporate settings, and implementing the far-reaching organizational changes that AI integration requires. The latter group has a deep understanding of the available and obtainable data and can estimate the feasibility of engineering a machine learning method or algorithm in a reasonable time – details from which business people are typically detached.

Bridging the communication gap between business and data science teams

Group communication occurs through joint discussions, presentations, and flip-chart sharing. Usually, the business side of things is handled first; senior management green-lights the idea at a high level, and only after that are the data scientists engaged.

However, since the groups use different jargon and have profoundly different backgrounds, they might still understand the critical aspects of the project differently. Failing to get on the same page early often leads to loops of re-defining the assignment or, if left unchecked, to releasing products that seemingly fulfill the plan but don’t meet the project’s and customers’ requirements.

Understanding project aspects: A collaborative approach

In this manner, the whole, diverse project team’s expertise is captured, and all project aspects, from trivial to crucial, are considered. Technical details such as relevant features and prediction targets are not just discussed within the data science team; they are explained clearly to the stakeholders, and the technology’s possible impact on organizational structures and decision optimization is discussed upfront.

Business perspective: Creating value and defining success

From this point, specific questions about the business view of the opportunity are explored:

  • How does the technology create value for the organization? For example, which specific problem does it solve? Is the best use case a substantial improvement of an existing offering or a launch of a new one?
  • How is success defined? The data science team might be tempted to concentrate on metrics assessing the machine learning model’s predictions. However, these predictions usually have little to do with the quality of the AI’s operational decisions, which are typically assessed not by technical metrics but with the help of KPIs.

This is not to say technical metrics should be ignored. Instead, it’s essential to understand that several layers of noisy data usually separate the prediction of a machine-learning model from the operational decision stemming from it.
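
To illustrate the gap between a technical metric and a business KPI, here is a toy sketch (all probabilities, labels, costs, and revenue figures are invented for the example): the decision threshold that maximizes a prediction metric such as accuracy is not necessarily the one that maximizes the business outcome.

```python
# Toy sketch: the same model predictions, evaluated two ways.
# All numbers below are invented for illustration.
probs  = [0.05, 0.2, 0.35, 0.5, 0.65, 0.8, 0.95]  # model scores
actual = [0,    0,   1,    0,   1,    1,   1]      # ground truth

CONTACT_COST = 30    # cost of acting on a positive prediction (hypothetical)
SAVED_REVENUE = 40   # value of a correctly targeted action (hypothetical)

def accuracy(threshold):
    """Technical metric: fraction of correct predictions."""
    preds = [1 if p >= threshold else 0 for p in probs]
    return sum(p == a for p, a in zip(preds, actual)) / len(actual)

def profit(threshold):
    """Business KPI: revenue from true positives minus cost of all actions."""
    preds = [1 if p >= threshold else 0 for p in probs]
    contacts = sum(preds)
    true_pos = sum(p and a for p, a in zip(preds, actual))
    return true_pos * SAVED_REVENUE - contacts * CONTACT_COST

thresholds = [0.1, 0.3, 0.5, 0.7, 0.9]
best_by_accuracy = max(thresholds, key=accuracy)  # 0.3
best_by_profit = max(thresholds, key=profit)      # 0.7
print(best_by_accuracy, best_by_profit)
```

Here the accuracy-optimal threshold (0.3) triggers many expensive actions, while the profit-optimal threshold (0.7) acts only where the expected value is positive – which is exactly why the two teams must agree on what is being optimized.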

Organizational changes and employee training

After carefully exploring the new business opportunity, steps are taken to make all associated decisions explicit and to specify every objective. The changes the organization will have to undergo to accommodate the switch in operational decision management are also considered: if machines fully or partially conduct some of the operations, how are employees trained so that their efforts and the AI complement each other? Is additional training required?

Diving into technical specifics

At this stage, the technical part is delved into:

  • What predictions should the machine learning system output to fulfill the objectives?
  • What feature variables should be considered for the machine to output the most accurate predictions, and based on the potential features, which data sources should be considered first?

Data considerations: quality and sources

The AI will also need adequate processing capabilities to handle the input data and make predictions. Is there sufficient storage and computing capacity? Where will the model operate (private data center, cloud)?

Addressing constraints and security concerns

How are case-specific constraints dealt with? Will the model have to make predictions within certain time frames? Can specific requirements in terms of data security and privacy be met?
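
Where prediction latency matters, one way to sanity-check such a constraint is to measure it directly before committing to it. A minimal sketch (the `predict` stub and the 50 ms budget are made-up stand-ins, not recommendations):

```python
# Sketch: checking a latency constraint before committing to an SLA.
# The model call is a stand-in; the 50 ms budget is a made-up requirement.
import time

LATENCY_BUDGET_MS = 50  # hypothetical per-prediction requirement

def predict(features):
    # Stand-in for a real model call.
    return sum(features) / len(features)

def p95_latency_ms(n_calls=200):
    """Time repeated calls and return the 95th-percentile latency in ms."""
    samples = []
    for _ in range(n_calls):
        start = time.perf_counter()
        predict([0.1, 0.2, 0.3])
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[int(0.95 * len(samples))]

latency = p95_latency_ms()
print(f"p95 latency: {latency:.3f} ms (budget: {LATENCY_BUDGET_MS} ms)")
```

Measuring a tail percentile rather than the average matters here: an SLA is usually violated by the slowest requests, not the typical ones.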

Monitoring success and handling deviations

Finally, once success criteria have been defined, how is success monitored? Are the chosen metrics genuinely relevant? Is there a concrete plan for handling deviations from the allowed ranges and for incidents, should they occur?
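
As a minimal sketch of such a deviation check (the baseline statistics and tolerance below are illustrative assumptions, not recommendations), incoming data can be compared against a baseline recorded at training time, with an alert raised when it drifts outside the allowed range:

```python
# Sketch: a minimal drift check that flags when a live feature's mean
# drifts outside an allowed band around the training-time baseline.
# Baseline values and the tolerance are illustrative assumptions.
import statistics

TRAIN_MEAN = 12.0   # feature mean recorded at training time (hypothetical)
TRAIN_STDEV = 2.0   # feature stdev recorded at training time (hypothetical)
TOLERANCE = 3.0     # allowed deviation, in baseline standard deviations

def drifted(batch):
    """Return True when the batch mean deviates beyond the allowed range."""
    batch_mean = statistics.fmean(batch)
    return abs(batch_mean - TRAIN_MEAN) > TOLERANCE * TRAIN_STDEV

print(drifted([11.5, 12.3, 12.8, 11.9]))  # close to baseline -> False
print(drifted([25.0, 26.1, 24.7, 25.5]))  # far from baseline  -> True
```

In production this check would feed an alerting system and a documented incident playbook, so that a flagged deviation triggers a defined response rather than an ad-hoc scramble.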

The success of any AI initiative hinges on the seamless collaboration between business and technical teams. A shared vision, open communication, and a clear understanding of both business and technical objectives are essential for navigating the complexities of AI projects. By acknowledging and addressing the challenges outlined in this piece, organizations can pave the way for more effective and impactful AI implementations.

To sum up

Although ML has transformed entire sectors and industries, the disruption could happen much faster were it not for fundamental limitations and the community’s obsession with novel methods rather than real-world applications.

As many have pointed out, the latest research papers exhibit troubling patterns: misuse of language, excessive mathiness that obfuscates the message, and failure to identify the sources of empirical gains or to distinguish between explanation and speculation. As a result, their value is limited even for the scientific community, let alone for those trying to apply ML to real business problems.

If you want to learn more about applying AI to real-world business tasks, contact our experts for a free consultation.
