Gartner: AI is moving fast and will be ready for prime time sooner than you think – TechRepublic

Companies have two to three years to lay the groundwork for successful use of generative AI, synthetic data and orchestration platforms.
Gartner analysts predict that numerous AI initiatives will move quickly from the first stage of the hype cycle to the final one over the next two to five years.
Users want more than artificial intelligence can provide at the moment, but those capabilities are changing fast, according to Gartner’s Hype Cycle for Artificial Intelligence, 2021 report. Gartner analysts described 34 types of AI technologies in the report and noted that the AI hype cycle is faster-paced than most, with an above-average number of innovations expected to reach mainstream adoption within two to five years.
Gartner analysts found more innovations in the innovation trigger phase of the hype cycle than usual. That means that end users are looking for specific technology capabilities that current AI tools can’t quite deliver yet. Synthetic data, orchestration platforms, composite AI, governance, human-centered AI and generative AI are all in this early phase.
More familiar technologies, such as edge AI, decision intelligence and knowledge graphs, are at the peak of inflated expectations phase of the hype cycle, while chatbots, autonomous vehicles and computer vision are all in the trough of disillusionment.
SEE: Salesforce rolls out AI-powered workflows, contact center updates in Service Cloud
Gartner analysts Shubhangi Vashisth and Svetlana Sicular wrote the report and identified four AI mega trends: responsible AI; small and wide data; operationalization of AI platforms; and efficient use of data, model and compute resources.
Vashisth and Sicular also note an increased focus on minimum viable products and accelerated AI development cycles, which they consider an important best practice.
These six technologies are all in the “innovation trigger” phase of the hype cycle and are expected to hit the plateau of productivity (the end of the hype cycle) within two to five years:
Here is a brief description of each type of AI, based on Gartner’s hype cycle report.
Composite AI
This approach combines multiple AI techniques to broaden knowledge representation and solve a wider range of business problems more efficiently. The goal is to build AI solutions that need less data and energy to learn, making the technology available to companies that lack large amounts of data but have significant human expertise. Composite AI is emerging, according to Gartner, and has penetrated 5% to 20% of the target market.
This technique is best when there is not enough data for traditional analysis or when the “required type of intelligence is very hard to represent in current artificial neural networks.”
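A minimal sketch of the composite idea: pair a knowledge-driven technique (rules encoding human expertise, which need no training data) with a data-driven score. Every function name, threshold and weight here is illustrative, not from the report.

```python
def rule_score(transaction: dict) -> float:
    """Encode expert knowledge directly; no training data needed."""
    score = 0.0
    if transaction["amount"] > 10_000:
        score += 0.5  # unusually large amount
    if transaction["country"] not in transaction["usual_countries"]:
        score += 0.4  # unfamiliar location
    return score

def model_score(transaction: dict) -> float:
    """Stand-in for a small learned model's probability output."""
    return 0.2  # hypothetical constant for this sketch

def composite_score(transaction: dict) -> float:
    # Blend both signals; the rules compensate for limited data.
    return min(1.0, 0.6 * rule_score(transaction) + 0.4 * model_score(transaction))

tx = {"amount": 12_000, "country": "XX", "usual_countries": ["US"]}
print(composite_score(tx) > 0.5)  # True
```

The point of the blend is that when labeled data is scarce, the hand-written rules carry most of the signal; as data accumulates, the learned component's weight can grow.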
AI orchestration and automation platforms
Companies use AI orchestration and automation platforms (AIOAP) to standardize DataOps, ModelOps, MLOps and deployment pipelines and to put governance practices in place. This technology also unifies development, delivery and operational contexts, particularly around reusing components such as feature and model stores, monitoring, experiment management, model performance and lineage tracking. The trend is driven by the problems created by traditionally siloed approaches to data management and analysis. AIOAP is emerging and has reached 1% to 5% of the target audience.
SEE: Open source powers AI, yet policymakers haven’t seemed to notice
To implement AIOAP, Gartner recommends that companies audit current data and analytics practices, simplify data and analytics processes, and use cloud service provider environments.
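One small piece of what these platforms unify is lineage tracking: recording which data and metrics each model version came from. The sketch below shows the idea with an in-memory registry; real platforms persist this, and all names here are illustrative.

```python
import time

# Toy in-memory model registry; a real AIOAP tool would persist this.
registry: dict[str, dict] = {}

def register_model(name: str, version: str, dataset: str, metrics: dict) -> None:
    """Record a model version with the data and metrics it came from."""
    registry[f"{name}:{version}"] = {
        "dataset": dataset,            # lineage: which data produced the model
        "metrics": metrics,            # performance at registration time
        "registered_at": time.time(),  # when it entered the registry
    }

register_model("churn", "1.0", dataset="customers-2021-08", metrics={"auc": 0.81})

# Lineage query: trace a deployed model back to its training data.
print(registry["churn:1.0"]["dataset"])  # customers-2021-08
```

Keeping this metadata in one place is what lets teams reuse models and audit them later, rather than reconstructing provenance from scattered notebooks.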
AI governance
AI governance is the practice of establishing accountability for the risks that come with using AI. Government leaders in Japan, the U.S. and Canada are setting guardrails for AI, some voluntary and some binding. The analysts note that AI without governance is dangerous, and that putting rules in place helps establish accountability.
Governance efforts should not be stand-alone initiatives.
Governance is emerging and has reached 1% to 5% of the target audience. 
Companies should set risk guidelines based on business risk appetite and regulations and ensure that humans are in the loop to mitigate AI deficiencies. 
Generative AI
This type of AI applies what it has learned to create new content, such as text, images, video and audio files. Generative AI is most relevant to the life sciences, healthcare, manufacturing, material science, media, entertainment, automotive, aerospace, defense and energy industries, according to the report. The analysts predict that generative AI will disrupt software coding and could automate up to 70% of the work done by programmers when combined with automation techniques. The technology can also be misused for fraud, malware, disinformation and fomenting social unrest.
SEE: 3 ways criminals use artificial intelligence in cybersecurity attacks
This technology is emerging and has reached less than 1% of the target audience. The analysts recommend paying close attention to generative AI because they expect rapid adoption. Companies should prepare to deal with deepfakes, determine initial use cases and think about how synthetically generated data could speed up the analytics development cycle and lower the cost of data acquisition.
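The core generative idea, learn patterns from data and then sample new content, can be shown with a toy word-level Markov chain. Real generative AI uses large neural networks, not this; the corpus and seed below are purely illustrative.

```python
import random
from collections import defaultdict

random.seed(7)  # fixed seed so the sketch is repeatable
corpus = "the cat sat on the mat and the cat ran"
words = corpus.split()

# Learn: record which word tends to follow which.
follows: defaultdict[str, list] = defaultdict(list)
for a, b in zip(words, words[1:]):
    follows[a].append(b)

# Generate: sample new text from the learned transitions.
word, out = "the", ["the"]
for _ in range(5):
    word = random.choice(follows.get(word, words))
    out.append(word)
print(" ".join(out))
```

Even this toy shows the synthetic-content risk the analysts flag: the output is plausible-looking text that never appeared in the training data.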
Human-centered AI
This approach, also called augmented intelligence or human-in-the-loop, assumes people and technology are working together: certain tasks are completed by an algorithm and others by humans, and people can take over a process when the AI has reached the limits of its capabilities. Human-centered AI (HCAI) can help companies manage AI risks and be more ethical and efficient with automation. According to the report, “Many AI vendors have also shifted their positions to the more impactful and responsible HCAI approach.”
HCAI is emerging and has reached 5% to 20% of the target audience. Gartner recommends establishing HCAI as a key principle and creating an AI oversight board to review all AI plans. Companies also should use AI to focus human attention where it is most needed to support digital transformation.
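The human-in-the-loop handoff described above can be sketched as a confidence-based router: the model completes high-confidence cases, and everything else goes to a person. The classifier, threshold and labels are hypothetical stand-ins, not anything from the report.

```python
CONFIDENCE_THRESHOLD = 0.80  # assumed business risk threshold

def classify(ticket: str) -> tuple[str, float]:
    """Hypothetical model: returns (label, confidence)."""
    # Toy rule standing in for a real classifier.
    if "refund" in ticket.lower():
        return "billing", 0.95
    return "unknown", 0.40

def route(ticket: str) -> str:
    label, confidence = classify(ticket)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{label}"  # the algorithm completes the task
    return "human-review"       # a person takes over at the model's limits

print(route("Please refund my order"))  # auto:billing
print(route("My device is on fire"))    # human-review
```

The threshold is where the business risk appetite enters: lowering it automates more work, raising it sends more cases to humans.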
Synthetic data
Artificially generated data is one solution to the challenge of obtaining real-world data and labeling it to train AI models. Synthetic data also sidesteps the problem of removing personally identifiable information from live data, and it is cheaper and faster to obtain, reducing the cost and time of machine learning development. The drawbacks are that it can carry over bias problems, miss natural anomalies or fail to contribute new information beyond what the existing data contains.
This technology is emerging and has reached 1% to 5% of the target audience. Companies should work with specialist vendors while this technology matures and with data scientists to make sure a synthetic data set is statistically valid.
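A minimal sketch of the idea, using only the Python standard library: fit simple summary statistics to a real data set, sample a synthetic set from the fitted distribution, then run the kind of statistical-validity check the analysts recommend. The numbers are illustrative, and real synthetic-data tools model far richer structure than a single Gaussian.

```python
import random
import statistics

random.seed(42)

# Stand-in for a real, hard-to-obtain data set (hypothetical values).
real = [random.gauss(50.0, 10.0) for _ in range(2000)]

# Fit simple summary statistics to the real data.
mu = statistics.mean(real)
sigma = statistics.stdev(real)

# Sample a synthetic set from the fitted distribution: cheaper to
# produce and free of personally identifiable information.
synthetic = [random.gauss(mu, sigma) for _ in range(2000)]

# Basic validity check: synthetic data should roughly preserve the
# real data's mean and spread (it cannot add new information).
print(abs(statistics.mean(synthetic) - mu) < 1.0)
print(abs(statistics.stdev(synthetic) - sigma) < 1.0)
```

Note the limitation the report calls out: because the synthetic set is sampled from a model of the real data, it reproduces that model's biases and contains no anomalies the model did not capture.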
Veronica Combs is a senior writer at TechRepublic. For more than 10 years, she has covered technology, healthcare, and business strategy.