Month: Dey 1403

6 most popular AI tools for modern product teams

In the AI space, your team needs to move fast to stay competitive, and a fast development workflow depends on having the right tools. You probably already know the big names, so in this post we’ll showcase some cool and lesser-known tools on the market that can boost your current workflow. 

Let’s dive right into it. 

1. Iterate AI: AI-powered product analytics implementation

Most teams use tools like Mixpanel, Amplitude, WebEngage, Monoengage, etc. to track user behavior and gather important insights. Iterate AI offers an AI agent that automates the implementation of product analytics, helping teams power up their existing analytics tracking. 

Key features: 

Instant tracking plan generation: Iterate AI’s AI agent takes the guesswork out of deciding which events and attributes to track. Simply specify the features and metrics you want to analyze (in natural language), and the AI will generate a comprehensive list of events to monitor.

Automated code instrumentation: Implementing event tracking can be a tedious and error-prone process, but Iterate AI’s AI agent simplifies it. The agent automatically inserts the necessary code snippets into your codebase and creates a pull request for your engineers to review (see the example snippet after this list).

Data flow monitoring: Keeping track of which events are being sent to each analytics tool can be challenging, especially as your product evolves. Iterate AI’s platform monitors the flow of events and alerts you if anything deviates from your tracking plan.
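For illustration, here is a minimal, hypothetical sketch (in Python, using the Mixpanel SDK as a stand-in analytics destination) of the kind of tracking call such an agent might insert into a codebase. The project token, event name, and properties are placeholders, not anything specific to Iterate AI.

```python
# Hypothetical tracking snippet an instrumentation agent might add to a codebase.
# The project token, event name, and properties are placeholders.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")

def on_checkout_completed(user_id: str, cart_total: float, item_count: int) -> None:
    # Send a "Checkout Completed" event with the attributes the tracking plan asks for.
    mp.track(user_id, "Checkout Completed", {
        "cart_total": cart_total,
        "item_count": item_count,
    })
```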

2. Cursor: AI-powered code editor

Cursor is an AI-first code editor built on top of VS Code. It is changing how developers write code; it uses AI (obviously) and positions itself as an alternative to GitHub Copilot.

Key features: 

AI code completion: Cursor’s AI model understands the context of your code and provides suggestions; it’s more than simple auto-complete, and it can complete entire functions based on the surrounding code.

Copilot++: This advanced feature takes code generation to another level, making more powerful edits across multiple lines of code and automating routine coding patterns for cleaner, more efficient code.

Codebase chat: Chat with your current codebase in natural language to better understand and debug your programs and applications.

It’s like having your coding teacher available 24/7, right in your editor.

3. Jira: Agile project management tool

This tool is your go-to solution for simplifying software development workflows. Jira helps teams plan, track, and manage projects effectively. 

Key features: 

Customizable workflows: Tailor workflows to meet different team needs, enhancing flexibility in project management.

Integration with development tools: Seamlessly integrates with tools like GitHub and Bitbucket for enhanced collaboration.

Robust reporting and analytics: Offers numerous reports (e.g., burndown charts) that provide insights into project progress and team performance.

Agile boards: Supports Scrum and Kanban boards that help visualize work in progress and manage tasks efficiently.

Mobile application: Allows teams to stay connected and engaged through native mobile apps compatible with Android and iOS.

4. Slack: Team collaboration platform

Slack is a cloud-based messaging platform that changes the way you communicate with your team, clients, and partners through organized channels and deep integrations with top tools and services. 

Key features:

Real-time messaging: Facilitates instant communication through channels or direct messages, improving team collaboration.

File sharing: Enables easy sharing of documents, images, and other files among team members within conversations.

Searchable message history: Provides a searchable archive of conversations for easy reference to past discussions or decisions.

Integrations with third-party apps: Supports integration with over 2,600 applications like Google Drive and Trello for streamlined workflows.

Huddles: Allows quick audio or video discussions within channels for informal collaboration sessions.

5. Docker: Containerization technology

Docker changes the way applications are deployed: it uses containerization to package your application along with its dependencies into portable containers. 

Key features:

Consistent environments: Ensures uniformity across development, testing, and production environments by using containers that share the same operating system kernel.

Easy scaling of applications: Facilitates quick scaling by allowing multiple instances of containers to be spun up or down as needed based on demand (see the sketch after this list).

Integration with DevOps tools: Works seamlessly with other related tools like Kubernetes.
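As a rough illustration of both points, here is a minimal sketch using the Docker SDK for Python (docker-py). It assumes a local Docker daemon is running, and the image and commands are only examples.

```python
# Minimal sketch with the Docker SDK for Python; assumes a running Docker daemon.
import docker

client = docker.from_env()

# Run the same image locally that you would run in production,
# so the environment stays consistent across machines.
output = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,  # clean up the container after it exits
)
print(output.decode())

# "Scaling" here simply means starting more instances of the same image.
workers = [
    client.containers.run("python:3.12-slim", ["sleep", "60"], detach=True)
    for _ in range(3)
]
print(f"started {len(workers)} worker containers")
```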

6. Figma: Collaborative design for the modern era

Figma turns the design process up a notch by helping teams collaborate in real time on a shared canvas. No need to get tangled in email threads to resolve design version conflicts.

Key features:

Cloud-based platform: Accessible via web browsers without needing downloads or installations, allowing easy access from any device.

Real-time collaboration: Multiple users can work on the same design file simultaneously, seeing each other’s changes instantly to enhance teamwork.

Prototyping tools: Offers powerful features for creating interactive prototypes that simulate user flows for testing purposes.

Design systems management: Supports reusable components and styles that promote consistency across projects while streamlining the design process.

Developer handoff features: Simplifies the transition from design to development by generating design specs directly within the platform for better communication between designers and developers.

Conclusion:

Having the right tools can make a real difference in how efficiently your team works. Tools like Iterate AI and Cursor are changing the way teams approach product analytics and coding, making processes smoother and faster. On the project management side, Jira helps keep everything organized, while Slack brings teams closer together with easy, real-time communication and integration with other apps your team already uses.

On a more technical front, Docker ensures that your applications run consistently across different environments, making scaling much easier. For design teams, Figma takes collaboration to a new level, allowing real-time editing and feedback, which cuts down on the back-and-forth. By adding these tools to your toolkit, you’re setting your team up to work smarter, faster, and more effectively—giving you a competitive edge.

Only 2.1% avoided generative AI in 2024: Find out why

Non-users of generative AI tools

This year, only 2.1% of respondents in our Generative AI Report said they don’t use generative AI tools, down from 11.8% last year, and we wanted to know what was behind this choice and the drop.

This significant drop suggests a variety of important underlying factors, such as increased awareness and understanding, broader accessibility, proven effectiveness and ROI, peer influence and industry trends, the evolution of technology, and cultural and organizational shifts.

Which tools have you heard of? 

ChatGPT

OV

Matlab

Ansys

Gemini

LLama

Midjourney

Why have you chosen not to use generative AI tools?

Create our own internally – 33.4%

Lack of interest – 33.3% 

Irrelevance – 33.3%

Respondents’ reasons for not using generative AI were evenly distributed.

The choice to develop AI tools internally could mean there’s a preference for using customized solutions tailored to the specific business. This can be driven by worries over control or wanting to keep proprietary systems. 

Respondents also indicated a lack of interest as a reason for not adopting external generative AI tools. Potentially originating from a perceived lack of clear advantage or understanding of how these tools could benefit their specific operations, this decision points to a possible gap in awareness. 

Similarly, there’s a perception that generative AI tools are irrelevant to operations, meaning there could be a disconnect between AI technologies’ offerings and potential users’ needs or understandings. 

Would you consider using generative AI tools in the future?

All non-users of generative AI stated they’d be willing to use the tools in the future, which points to essential insights about perceptions and the evolution of these technologies.

Generative AI tools are being broadly accepted and have a positive outlook, meaning they’ve proven their value and have managed to convince even previously hesitant individuals or companies of their potential benefits. 

There’s also marked potential for widespread implementation of these tools across more varied sectors and use cases as more non-users adopt the technology. We might see generative AI even more extensively integrated into business operations, potentially leading to a new wave of digital transformation.

Generative AI 2024: Key insights & emerging trends
Download the Generative AI 2024 Report for in-depth analysis on top tools, user benefits, and key challenges shaping the future of AI technology.

What tool would you consider using?

The respondents highlighted the tools below as ones they’d consider using:

Task-dependent

LLaMa 3

Matlab

OV

Ansys

The general consensus about the use of generative AI 

We wanted to know the opinion of the respondents’ companies about generative AI. 

For – 33.3%

Neutral – 66.7%

The majority of respondents (66.7%) indicated that their companies hold a neutral opinion about generative AI, while the rest (33.3%) stated that their company has a favorable outlook.

The prevalence of this neutral view could suggest that many companies are still assessing the potential impacts and benefits of generative AI without yet committing fully to its adoption. 

Similarly to last year, there’s an absence of opposition, implying a potential open-minded attitude about generative AI and perhaps even its future adoption.

Do you trust generative AI tools?

We wanted to know how much those who don’t use the technology trust it. Perhaps surprisingly, not everyone who said they’d use the technology in the future also said they trusted it.

Yes – 72.8%

No – 27.2%

The level of trust (72.8%) in generative AI tools remained the same as last year, continuing the trend of not everyone who said they’d use the technology in the future also trusting it.

What impact do you think generative AI tools are having on society?

Mostly positive – 33.3%

Mixed – 66.7%

The majority of respondents (66.7%) see the impact of the tools as mixed, which could suggest a nuanced understanding of the technology’s benefits against its challenges. This view could stem from the awareness that, while driving innovation and efficiency, generative AI has the potential to pose risks in ethics and bias.

How do you envision the role of generative AI evolving in your industry?

All non-users of generative AI view its role as a supplementary tool, which could underline that while having its uses, it’s not vital for core business operations. This could point to an opportunity for developers to educate and demonstrate AI’s broad benefits and capabilities.

What specific security or privacy concerns deter you from using generative AI tools?

Potential misuse of personal/generated data – 33.4%

Lack of transparency in data usage – 33.3%

Other – 33.3%

Potential misuse of personal/generated data and lack of transparency in data usage are equal concerns for non-users of the technology. It could indicate the fear of personal data being mishandled by AI systems or third parties.

Transparency over how data is used, handled, and stored by AI systems also needs to be fully addressed, through strong data governance policies and regulations that are clearly communicated by AI providers.

Those who answered ‘other’ didn’t specify.

Download the full report to see why and how end users and practitioners are using generative AI tools.



3 AI use cases to elevate your strategy

This article is based on Liza Adams’s brilliant talk at the Product Marketing Summit in Denver.

Product marketers and even CMOs rarely make it to the boardroom. In fact, only 41 members of Fortune 1000 boards are CMOs, and less than 3% of board members have managerial-level marketing experience. 

Why?

Because marketing is often dismissed as tactical – beautiful ads, catchy campaigns, and glossy brochures – while the strategic work that underpins it goes unnoticed. This misconception limits opportunities for marketers to demonstrate the true impact of their expertise on business decisions. 

But here’s the good news: AI is changing the game.

AI has the power to elevate product marketing from a tactical function to a strategic force. It enables us to align executives, refine priorities, and amplify results, making the work of product marketers more visible and valuable at the highest levels. 

Yet mastering AI isn’t a race – it’s a journey. Whether you’re just starting to explore its possibilities or already using it to shape strategy, it’s important to embrace where you are and keep learning.

In this article, I’ll show how AI can help you step into a more strategic role by focusing on three key use cases: 

Segmentation and targetingCompetitive analysisThought leadership 

These examples will demonstrate how AI can go beyond creating content to drive strategic decision-making and deliver real impact. 

Let’s dive in.

AI use case #1: Segmentation and targeting

Our first use case comes from a real scenario where I acted as a fractional CMO. The company was what I like to call a “COVID darling” – it experienced rapid growth during the pandemic; however, post-COVID, it struggled to sustain that growth. 

The executive team’s instinct was to expand their market and target more segments. My response? Don’t go broad – go deep.

Instead of spreading resources thinly across multiple segments, I encouraged the team to focus on two or three key segments. The goal was to understand these customers so thoroughly that we could become the best fit for their unique needs. Broad, shallow targeting wouldn’t deliver the value these customers required.

Here’s where the challenge got interesting. Each executive had their own idea about which segment to prioritize:

The CEO wanted to target healthcare, citing its large market size.The CFO pushed for manufacturing, pointing to its high growth rate.The CPO advocated for retail, aligning with the product roadmap.

The truth is, they were all right – from their individual perspectives. So, the product marketing team and I developed a framework to align these viewpoints and make an informed decision.

We identified evaluation criteria for analyzing each segment, including factors like market size, growth potential, competitive intensity, number of reference customers, and partner strength. Then, we built a heatmap to visually compare how each segment performed against these criteria.

This heatmap became a game-changer. It allowed the executive team to see, at a glance, how each segment stacked up. This data-driven approach shifted the conversation from subjective opinions to objective insights, making it clear which segments offered the most strategic opportunity.

By narrowing the focus and targeting the right segments, the company could allocate resources effectively, align their teams, and maximize their market fit – rather than chasing opportunities that stretched them too thin.

The challenge of gathering data

Before I dive into how we used AI to create a market heatmap, it’s important to acknowledge the most challenging part of the process: data collection and curation.

While the conversation with ChatGPT took about three hours, gathering and organizing the necessary data took two to three weeks. This stage was critical because feeding AI accurate, well-structured data is the foundation for meaningful insights.

Here’s a breakdown of the types of data we gathered and the sources we used:

Market size and growth: Pulled from analyst reports, including Gartner, to estimate total addressable markets (TAMs) and growth trends.

Competitive intensity: Sourced from customer review platforms like G2 and Capterra to understand how competitors were performing in various categories.

Win rates: Derived from our CRM (in this case, HubSpot), including metrics on win-loss ratios.

Product roadmap alignment: Compiled in a Google Doc to compare customer needs across segments with our current and planned product offerings.

Partner strength: Extracted from a database tracking partner leads, conversions, and overall performance.

Customer references: Assessed from a reference database to evaluate the strength and quantity of reference customers in each segment.

This process involved pulling data from disparate systems, formatting it consistently, and redacting sensitive information to maintain confidentiality. Only after this groundwork was done did we begin leveraging AI.

How we used ChatGPT to create our segment targeting heatmap

Once the data was ready, we uploaded it into ChatGPT in spreadsheet format and began prompting it for analysis. Here’s a simplified walkthrough of how we approached the first two rows of our heatmap – market size and growth – using AI:

Initial prompt: “You are an expert market researcher and analyst in the supply chain management space. Please review the attached Excel sheet, analyze it, and provide a summary of your key takeaways. I will provide further instructions after that.”

ChatGPT’s initial response included basic insights, like identifying the verticals with the highest growth rates and highlighting steady growth areas.

Follow-up prompt: “Please create a table with two rows: one showing the 2025 market size and another showing the growth rate you calculated. Please order the verticals as manufacturing, healthcare, energy, food, and retail.”

This prompt resulted in a clear, organized table, allowing us to visualize and compare the market data.

Heatmap creation: “Turn the table into a single heatmap reflecting forced rankings for market size and growth rate. Assign a score of 5 to the largest market size and highest growth rate, and a score of 1 to the smallest and lowest.”

The output was a color-coded heatmap that visually represented each segment’s market size and growth potential, making it easy to prioritize opportunities.

By repeating this process for the remaining rows – competitive intensity, win rates, partner strength, and customer references – we built a comprehensive heatmap that showed the most valuable segments to target.
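If you prefer to do the scoring step outside of ChatGPT, the forced-ranking logic is easy to reproduce. Below is a minimal sketch in Python/pandas under the same 5-to-1 scheme; the segment names, numbers, and weights are illustrative, not the actual client data.

```python
# Forced ranking of segments against curated criteria; all numbers are illustrative.
import pandas as pd

segments = ["Manufacturing", "Healthcare", "Energy", "Food", "Retail"]
metrics = pd.DataFrame(
    {
        "market_size_usd_b": [120, 150, 90, 60, 80],
        "growth_rate_pct": [9.5, 7.0, 6.0, 4.5, 8.0],
        "win_rate_pct": [22, 18, 25, 15, 20],
    },
    index=segments,
)

# Force-rank each criterion: 5 for the best segment, 1 for the worst.
scores = metrics.rank(axis=0, method="min").astype(int)

# Optionally weight the criteria before summing to an overall score.
weights = {"market_size_usd_b": 0.4, "growth_rate_pct": 0.4, "win_rate_pct": 0.2}
scores["overall"] = sum(scores[col] * w for col, w in weights.items())

print(scores.sort_values("overall", ascending=False))
```

Adjusting the weights dictionary is the on-the-fly tuning described later, where some criteria matter more than others to the executive team.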

Presenting the analysis to the executive team 

Next, it was time to present the findings to the executive team. It’s important to note that this analysis was just a starting point – a framework to guide discussions and foster a 360-degree view of the market opportunities. 

Unlike previous conversations where each executive approached the problem from their one-dimensional perspective, this approach introduced eight dimensions of analysis, offering a more holistic view.

With the heatmap in hand, the executive team could now debate and refine the findings collaboratively. Some execs disagreed with certain rankings, so we made some on-the-fly adjustments to the data. 

We also assigned different weights to certain criteria, recognizing that not all of them were equally important. For example, market growth might carry more weight than competitive intensity, depending on the company’s priorities. 

This flexibility allowed us to fine-tune the analysis and reach a consensus. And, within a week, we validated the findings and identified the top two to three market segments to focus on. 

AI and data analytics-driven finance transformation

Just as crude oil fueled the industrial revolution, data drives the engines of the current digital age. This flood of data, underpinned by rapid advancements in AI and data analytics, is fundamentally reshaping the finance function inside organizations.

It’s no longer a back-office number-cruncher; finance has evolved to become a strategic powerhouse for growth, performance optimization, and risk mitigation through the intelligent use of data. However, such transformation needs a strategic roadmap, along with deep knowledge of both technological capabilities and intrinsic peculiarities of financial operations.

Finance transformation has followed the broader technological development in data management and analytics. Initially, Enterprise Resource Planning (ERP) systems like SAP and Oracle integrated finance by consolidating dispersed processes into a centralized repository.

This paved the way for data warehouse-driven Business Intelligence (BI) dashboards offering fast insights into historical trends and performance measures. Today, the emergence of data lakehouses extends these capabilities, and AI and data analytics open a new frontier of predictive and prescriptive insight, setting finance functions on a path not just to understand the “what” and “why” of past performance, but also to anticipate “what’s next” and proactively shape their organizations’ futures.

Finance leaders have always been adept at navigating complex financial landscapes. However, “know-how” isn’t enough; we need to “know now.”

In other words, AI and data analytics are no longer optional extras; they’re valuable assets for discovering real-time insights, proactive decision-making, and predictive capabilities. 

I’ve led large-scale transformations for major financial services institutions and enterprises, and this experience has allowed me to witness first-hand how legacy systems limit agility and hinder strategic decisions.

What organizations need today is a finance team acting as strategic advisors – a group of professionals who can provide insight and foresight in real time to deal with emerging complexities and capitalize on opportunities. This involves a transformation brought about by four key objectives:

1. From transactional to strategic

Finance must shift from a transactional role focused on recording and reporting toward being a proactive partner that contributes to business strategy and value creation. This represents a fundamental change in mindset for finance professionals, using data and AI to identify trends, forecast outcomes, and drive strategic investments.

2. Operational excellence

Operational excellence is reshaping finance functions, not only bringing down costs but also unlocking new levels of productivity. Robotic Process Automation (RPA) solutions remove much of the manual effort and human error involved in processes such as invoice processing and reconciliation, freeing resources for more strategic programs.

3. Regulatory landscape

Finance functions operate in an increasingly complex regulatory environment. Maintaining compliance with evolving standards like IFRS and GAAP (or whichever regional standard may be at play, such as Chinese Accounting Standards (CAS) or Indian Accounting Standards (Ind AS)) requires truly robust systems and processes to contain risk. 

AI provides a robust solution by automating compliance checks and flagging potential violations in real time, including but not limited to lease accounting errors under IFRS 16, and by helping meet regulatory reporting requirements such as generating XBRL reports for SEC filings. This proactive approach minimizes penalties and reputational losses and ensures the accuracy and transparency of financial reporting.

4. Proactive risk management

Proactive identification and mitigation of financial risks are critical to organizational resilience and sustainable growth. AI-powered systems can continuously monitor transactions to identify anomalies and send early warnings of potential fraud or financial misstatement, allowing timely action to minimize potential losses.

Artificial intelligence and data analytics

AI and data analytics are no longer concepts of the future, but very real and important tools for today’s finance functions. They are the means by which the finance function can pursue the objectives outlined above.

Combining the power of AI and data analytics, finance can take a quantum leap from being a reactive cost center to a proactive strategic partner.

Automation with RPA

RPA automates the repetitive, rules-based activities that finance departments have traditionally handled manually. From invoice processing to data entry, reconciliation, and more, companies can reduce manual effort and errors by up to 50% when RPA is deployed. This not only frees valuable human capital for more strategic analysis and decision-making, but also delivers more accurate and consistent results.

Predictive power of machine learning

ML algorithms are also allowing finance functions to move from historical reporting to predictive capabilities. ML models can forecast future trends by analyzing historical data and recognizing patterns, enabling better budgeting, financial planning, and resource allocation. This, in turn, allows organizations to anticipate market changes, optimize resource utilization, and proactively make decisions that grow profitability.



Unstructured data unlocks insights

NLP and LLMs now empower finance professionals to distill valuable insights from unstructured data sources like contracts, regulatory filings, reports, and news articles. This critical context helps drive decisions and allows finance teams to deeply understand market dynamics, customer sentiment, and emerging risks, enabling organizations to make smarter decisions, identify opportunities in advance, and reduce potential threats proactively.

Data-driven decision making

Advanced analytics dashboards and visualizations are revolutionizing the way finance departments present and consume information. These tools provide instant insights into financial performance and drive data-based decisions throughout all levels of the organization. More importantly, they arm business leaders with exactly what they need to make the right decisions quickly, helping them keep pace with an increasingly agile world.

Strengthened risk management

AI-powered systems underpin risk management in finance functions. They support compliance, reduce financial risk, and continuously monitor transactions for anomalies and possible fraudulent activity. This proactive approach to risk management helps organizations protect their assets, maintain their reputation, and ensure long-term sustainability.

Key use cases: Where AI is making a real impact

The use of AI and data analytics in finance is not some vague concept; it’s a reality being implemented by leading financial institutions to solve real-world challenges and drive tangible business value. 

Here are some key use cases where AI is making a great impact in 2024, supported by recent reports and statistics:  

Invoice processing

AI is revolutionizing invoicing by automating data extraction, invoice matching, and fraud detection. AI-driven invoice processing solutions can automatically extract key information from invoices, whether paper or digital, eliminating manual entry and reducing errors.

For example, the AI-driven document processing platform Rossum claims up to 98% accuracy in invoice data extraction, boosting efficiency and cutting processing time. Moreover, AI algorithms can execute three-way matching of invoices against purchase orders (POs) and receipts.
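To make the matching step concrete, here is a simplified sketch of three-way matching in Python; the data model, field names, and 2% amount tolerance are assumptions for illustration, not any particular vendor’s logic.

```python
# Simplified three-way match: invoice vs. purchase order vs. goods receipt.
# Field names and the 2% tolerance are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Document:
    po_number: str
    amount: float
    quantity: int

def three_way_match(invoice: Document, po: Document, receipt: Document,
                    amount_tolerance: float = 0.02) -> list[str]:
    """Return a list of discrepancies; an empty list means the match passes."""
    issues = []
    if not (invoice.po_number == po.po_number == receipt.po_number):
        issues.append("PO number mismatch")
    if invoice.quantity != receipt.quantity:
        issues.append("billed quantity differs from received quantity")
    if abs(invoice.amount - po.amount) > amount_tolerance * po.amount:
        issues.append("invoice amount outside tolerance of PO amount")
    return issues

# Example: the invoice overbills the PO by 4%, so it gets flagged for review.
print(three_way_match(
    Document("PO-1001", 5200.0, 100),   # invoice
    Document("PO-1001", 5000.0, 100),   # purchase order
    Document("PO-1001", 0.0, 100),      # goods receipt (quantity only)
))
```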

Accounts receivable

AI is also reshaping accounts receivable by greatly improving credit scoring, collections, and cash flow forecasting. AI algorithms can analyze huge volumes of data on customer payment history, credit scores, and market trends to predict late payments and flag the customers most likely to present a risk. This enables businesses to take a proactive approach to credit risk management and optimize collection strategies.

Accounts payable operations

AI is smoothing accounts payable workflows by automating invoice processing, vendor management, and fraud prevention. As mentioned earlier, AI can automatically extract and match invoice data, saving manual effort and reducing inaccuracies.

AI can also analyze vendor data to spot potential risks, such as financial instability or compliance issues, enabling proactive vendor management. Some AI algorithms can likewise identify anomalies in invoice data, such as duplicate invoices or suspicious payment requests, to prevent fraud and ensure compliance with internal controls.

Accounts reconciliation

AI will continue to drive efficiency and effectiveness in account reconciliation through automated data matching, identification of discrepancies, and preparation of reconciliation reports. AI algorithms can sift through large volumes of transaction data, identify and clear matching exceptions, and update reconciliation statuses. This reduces manual effort, cuts down on errors, and speeds up the reconciliation process.

Predictive financial forecasting

AI models enhance financial forecasting by analyzing vast amounts of historical data, market trends, and external factors to generate more accurate predictions of revenue, expenses, and cash flow. This allows organizations to anticipate future financial performance, identify potential challenges, and make proactive adjustments to their strategies.

For example, Shell and BP have been implementing machine learning since 2017 to predict changes in energy markets, improving revenue predictions based on global trends in energy consumption and pricing.
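As a toy illustration of the idea, the sketch below fits a simple linear trend to twelve months of revenue and projects the next quarter; the figures are invented, and real forecasting models would add seasonality, business drivers, and external factors.

```python
# Toy trend-based revenue forecast; figures are invented for illustration.
import numpy as np

revenue = np.array([10.2, 10.8, 11.1, 11.9, 12.4, 12.8,
                    13.5, 13.9, 14.6, 15.0, 15.8, 16.3])  # last 12 months, $M

months = np.arange(len(revenue))
slope, intercept = np.polyfit(months, revenue, deg=1)  # fit a linear trend

# Project the next quarter (three months) from the fitted trend.
future = np.arange(len(revenue), len(revenue) + 3)
forecast = slope * future + intercept
print([round(float(x), 1) for x in forecast])
```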

Automated financial reporting

RPA and AI are automating the generation of financial statements and reports, freeing up finance professionals from tedious manual tasks and reducing the risk of errors. This not only saves time and resources but also ensures greater accuracy and consistency in financial reporting. 

For example, a recent Gartner study found that 58% of organizations are using AI for financial reporting, in everything from automated data extraction and reconciliation to variance analysis and anomaly detection. Large language models are also being considered for parsing legislation and regulations in the countries where companies operate, to ensure each requirement is followed.

Fraud detection and compliance

AI algorithms are playing a crucial role in combating financial crime by monitoring transactions in real time, identifying suspicious patterns, and flagging potential fraud. This helps organizations stay ahead of increasingly sophisticated fraudsters and protect their financial assets. 

A report by the Communications Fraud Control Association (CFCA) found that fraud in the global telecom industry increased 12% in 2023, equating to an estimated $38.95 billion lost to fraud. AI-powered solutions can analyze call records for anomalies and flag suspicious activity, helping operators reduce such losses and protect their revenue streams.
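One common pattern behind this kind of monitoring is unsupervised anomaly detection. The sketch below uses scikit-learn’s IsolationForest on made-up transaction features; the features and contamination rate are assumptions for illustration only.

```python
# Minimal anomaly-based fraud screening sketch with scikit-learn's IsolationForest.
# Features and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount, hour_of_day, merchant_risk_score]
transactions = np.array([
    [42.0, 13, 0.1], [18.5, 9, 0.2], [55.0, 15, 0.1], [61.0, 11, 0.1],
    [23.0, 10, 0.3], [48.0, 14, 0.2], [9500.0, 3, 0.9], [51.0, 16, 0.1],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 marks outliers, 1 marks inliers

for row, flag in zip(transactions, flags):
    if flag == -1:
        print("flag for review:", row)
```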

Expense management automation

AI is streamlining expense management by automating expense reporting and reimbursement processes. By extracting data from purchase histories and receipts, categorizing expenses, and ensuring compliance with company policies, AI reduces errors, saves time, and frees employees from tedious administrative tasks.

For instance, retail companies like Walmart use AI to automate purchase ordering and expense management. AI-driven systems study past customer purchasing habits and predict future needs to optimize product stocking. This smooths procurement cycles and enhances overall operational effectiveness, saving considerable manual effort and reducing procurement costs.

Liquidity and cash management

Predictive analytics is optimizing cash flow forecasting, working capital management, and investment decisions, improving liquidity and financial stability. This allows organizations to better manage their cash flow, optimize working capital, and make informed investment decisions.

For example, big corporations like Hunt Companies have adopted the AI-powered Kyriba platform in their approach to real-time liquidity management. Equipped with integrated APIs, this platform makes working in the treasury easier by providing real-time visibility of cash flows for better capital allocation.

AI assists liquidity management in whatever way a firm might need, strengthening its ability to manage cash reserves with predictive analytics that can forecast future liquidity needs, especially in times of market turmoil.

In 2023, industries like healthcare also benefited from AI in treasury management. Health Care Service Corporation employed AI-driven treasury data analytics and reinvented cash flow management to make better working-capital decisions. The move freed up over 1,000 hours of productivity by automating what had been manually intensive cash management practices. With AI-driven models, the company made faster, more data-driven decisions that allowed it to manage liquidity better through unpredictable financial cycles.

Implementing AI in finance

The transformation of the finance function with AI is not a plug-and-play exercise; rather, it calls for a structured approach, commitment toward change, and deep insight into both the technology and nuances of financial operations.

Below is a high-level roadmap highlighting the major phases a company undergoes during its AI finance implementation.

This phase-based approach to AI implementation in finance underlines the strategic process and iterative evolution, as opposed to a typical finance transformation that would center around ERP systems, data warehouses, and business intelligence dashboards.

While those technologies centered on centralizing data and reporting, this roadmap treats data as the foundation for AI, with a strong emphasis on data governance, advanced analytics, and continuous improvement. It recognizes that AI is constantly evolving and requires agile adaptation to keep up, whereas most traditional system implementations are more rigid.

Besides that, it covers change management and cross-functional collaboration to ensure the seamless integration of AI into the finance function and broader organizational objectives. The approach treats AI in finance as a matter of unlocking predictive and prescriptive capabilities that underpin strategic decision-making and create new value, not mere automation.

Building the right foundation: Delivery structure and enterprise architecture

Successfully integrating AI into the finance function is not just about choosing the right technology; it is about building a sound foundation for effective implementation and adoption. This requires a robust delivery structure with a well-defined enterprise architecture that supports the organization’s AI ambitions.

Delivery structure:

Well-structured collaboration, knowledge sharing, and effective execution are what organizations require for effective AI adoption.

A Finance Transformation Center of Excellence (CoE) acts as a central hub of AI expertise, providing guidance, governance on best practices, and support for AI initiatives across the organization. It drives innovation, aligns with overall business strategy, and instills a culture of data-driven decision-making.

Of equal importance is the formation of cross-functional teams comprising finance, IT, and operations, among others. This involves all stakeholders in the design and implementation of AI solutions so that the diverse needs within the organization are catered for.

Moreover, external expertise and technology partnerships are an effective way to access skills and solutions that may be out of reach or unavailable internally, helping organizations accelerate AI implementation and tap into state-of-the-art technologies and best practices.

Finally, change management is essential for successful AI adoption. Clear communication, comprehensive training, and sustained support help employees navigate the changes AI brings, resolve their anxieties, and work within a culture of continuous learning and improvement.

Enterprise architecture

A clearly defined enterprise architecture ensures seamless integration of AI capabilities across all functions of finance, including financial planning, accounting, treasury, risk management, tax, procurement, revenue management, reconciliation, and strategic finance. It ensures that AI is utilized to its fullest extent throughout the whole finance function, driving maximum impact on efficiency, decision-making, and risk management.

From a technical perspective, this architecture needs a unified data layer that aggregates data from various sources into one centralized repository and maintains uniform data quality controls.

The integration layer, via APIs and data pipelines, provides efficient communication among the many systems involved, enabling proper data flow and, where applicable, analysis. The AI and analytics layer provides the platform for creating and deploying AI models and advanced analytics. The application layer integrates AI into existing financial applications to drive greater automation, insight, and decision-making.

The interface layer, including user-friendly dashboards, mobile apps, and conversational chatbots, gives users easy access to AI-powered insights and tools by making interactions with data and systems intuitive.

The security and governance layer comprises the security measures and compliance frameworks that protect data and ensure responsible, ethical use of AI systems. Finally, the infrastructure layer consists of scalable cloud and on-premise resources that allow AI workloads to handle rapid growth in the volume and complexity of financial data.

Governance of data and AI

The only way organizations can avoid inefficiencies, overlaps, and silos is by ensuring a strong data and AI governance framework. While various departments may have their own data and AI initiatives, a more centralized governance structure is required to achieve consistency across the board and to encourage collaboration that maximizes the value of these efforts.

Finance can lead this cross-functional effort, given its intimate understanding of enterprise data management, regulatory compliance, financial reporting, reconciliation, and balance sheet management.

By embedding the governance of data and AI into its core functions, finance can encourage cross-functional cohesion, optimise the use of data, and unlock the full value of AI while minimising the risks of AI.

Reaping the rewards: Benefits and outcomes

Where organizations are able to effectively fold AI into their finance functions, they tend to achieve major dividends in efficiency, decision-making, risk management, and strategic positioning.

Automation and efficiency

AI-driven automation cuts the tedium out of routine operations such as invoice processing, which often yields appreciable efficiency dividends. This translates into real cost savings; in one 2024 Deloitte case study, a global consumer goods company reduced finance operating costs by 15% through AI-driven automation.

Enhanced decision-making

Real-time analytics coupled with AI-driven insight can power data-driven decisions and better financial performance. According to PwC’s Global NextGen survey in 2024, 70% of finance executives believe AI will significantly enhance forecasting accuracy. Indeed, Unilever’s 2023 annual report showed that AI-powered demand forecasting improved forecast accuracy by 20%, with knock-on benefits for inventory management and reduced waste.

Risk mitigation

AI raises risk management to a higher level in fraud detection, compliance monitoring, and risk assessment. Synthetic identity fraud reportedly increased by 47% in 2023, further cementing the need for advanced, AI-powered fraud detection systems. In response, the use of AI in risk mitigation is growing.

According to Market.us, the AI in fraud detection market was valued at $12.1 billion in 2023 and is expected to reach $108.3 billion by 2033, growing at a CAGR of 24.5%. The market has grown rapidly because fraud tactics keep evolving, and most organizations now need to deploy AI within their fraud prevention solutions to ensure efficiency and accuracy.

Strategic partnership

AI enables finance to become a strategic partner in the business by delivering real-time insights with predictive capability. In the real estate sector, AI helps partners study market trends, property values, and customer preferences, and predictive analytics can guide developers and investors in forming strategic alliances to invest in high-growth regions or projects. 

The global generative AI in real estate market was estimated at USD 437.65 million in 2024, is expected to grow to USD 488.06 million in 2025, and is predicted to reach around USD 1,302.12 million by 2034, expanding at a CAGR of 11.52% between 2024 and 2034.

Competitive advantage

Early adoption of AI in finance positions organizations as innovators, able to adapt to changing market conditions and outperform competitors. According to Gartner, by 2026, 90% of finance functions will deploy at least one AI-enabled technology solution, but fewer than 10% of functions will see headcount reductions.

These examples show concrete benefits of AI in finance across a variety of industries. Applying AI to efficiency, innovation, and customer service gives early adopters an advantage, letting them race ahead of others and respond to constantly changing market conditions.

Future of finance: AI-driven and data-rich

The future of finance demands a commitment to change, strategic investment, and readiness to embrace transformation. To fully realize the potential of AI-driven finance, organizations need to foster active stakeholder engagement, invest in relevant talent, and continuously monitor and adapt their changes.

Key pillars of this transformation include securing executive sponsorship, roadmapping, upskilling the finance team, and monitoring key performance indicators (KPIs) that confirm AI initiatives are delivering the desired outcomes.

Sustainable finance

Beyond mere efficiency gains and enhanced decision-making, the future of finance will run on AI and big data analytics, with a new focus on green and sustainable finance whose practices AI will help enable. A recent KPMG article also highlighted that AI is revolutionizing sustainable finance by enabling advanced risk assessment, climate risk modeling, impact investing, sustainable supply chain management, and greenwashing detection, empowering financial institutions to drive positive environmental and social change while maximizing returns.

However, widespread adoption is still hindered by challenges such as data availability, model accuracy, regulatory uncertainty, and varying levels of maturity across different sectors and organizations. This is not just a shift; it is a fundamental redefinition of how financial systems will operate in an increasingly sustainability-focused world.

Monetizing data and AI models

Data and AI models will evolve from tools into crucial assets and intellectual property (IP) for companies. For finance functions that take the lead in developing and deploying innovative AI solutions, new revenue streams could come from offering data-driven products and services, licensing AI models, or even creating data marketplaces. 

The European Commission estimates that the EU’s data economy alone will be worth €829 bn in 2025, accounting for around 6% of regional GDP. In this new realm, the ability to use and commercialize data will become a key differentiator for forward-thinking financial companies.

Enhanced cybersecurity

As finance functions rely increasingly on data and AI, strong cybersecurity measures become crucial. AI applications will be used to detect and prevent cyber threats, which is essential for keeping sensitive financial data safe and maintaining the integrity of financial systems.

The market for AI-based cybersecurity is set to grow remarkably – from $24.3 billion in 2023 to nearly $134 billion by 2030, according to Statista. This surge underscores the crucial need for robust security measures as financial companies expand their digital presence and continue their transformation.

In recent years, ransomware has hit many sectors beyond financial services; among the hardest hit are manufacturing, healthcare, and energy.

Transforming cybersecurity with AI
Discover how AI is transforming cybersecurity from both a defensive and adversarial perspective, featuring Palo Alto Networks’ CPO Lee Klarich.

For example, in the 2021 Colonial Pipeline attack, both financial and operational data were affected. Many businesses have since turned to AI to spot anomalies in network activity and identify breaches quickly. In manufacturing, ransomware attacks on supply chains have pushed companies to adopt AI-driven security systems that can predict vulnerabilities based on data patterns and help protect financial transactions and intellectual property.

Meanwhile, in healthcare, AI-powered cybersecurity solutions are being used to protect sensitive patient data and financial records after ransomware attacks on US hospital systems in 2024 exposed patient and financial information.

Finance self-service agent assistants

Beyond these trends, the rise of AI-driven self-service assistants is about to revolutionize the way finance professionals interact with data. These assistants use NLP to let users analyze, process, and manage financial data in conversational language. Imagine asking an AI assistant to “forecast next quarter’s revenue based on current trends” or “identify any anomalies in this month’s expenses,” with no need for complicated software products or specialized skills. In other words, everyone can use this highly effective tool for financial analysis.
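As a rough sketch of what such an assistant could look like under the hood, the example below sends a conversational finance question to an LLM via the OpenAI Python SDK; the model name is a placeholder, the revenue figures are invented, and a production assistant would pull numbers from governed data sources rather than pasting them into the prompt.

```python
# Hedged sketch of a conversational finance assistant; model name is a placeholder
# and the figures are invented. Requires the openai package and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

monthly_revenue = "Jan: 10.2, Feb: 10.8, Mar: 11.1, Apr: 11.9 (in $M)"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a finance analysis assistant."},
        {"role": "user", "content": f"Given this revenue history: {monthly_revenue}. "
                                    "Forecast next quarter's revenue and flag any anomalies."},
    ],
)
print(response.choices[0].message.content)
```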

For example, in 2022, Gartner predicted that by 2025, 70% of white-collar workers would interact with conversational platforms daily. Additionally, the rise of AI has significantly accelerated the growth of self-service analytics, particularly by enhancing user accessibility and insight generation.

Gartner also predicts that by 2025, 80% of data and analytics innovations will be developed with AI and machine learning, further underscoring the pivotal role of AI in self-service analytics tools. These AI-driven assistants will not only respond to questions but will evolve into proactive advisors anticipating needs and automating routine tasks like report composition and account reconciliation.

AI-driven transformation in finance: What’s next?

The finance function is on the verge of a profound transformation, with AI and data analytics becoming essential for driving operational efficiency, revenue assurance, cost optimization, risk mitigation, and strategic growth.

As we look to the future, financial leaders must embrace continuous learning, agile adaptation, and robust data governance to fully leverage the potential of AI. Organizations should prioritize building data-driven cultures, investing in cross-functional teams, and implementing advanced AI systems to maintain a competitive edge.

Emerging technologies such as generative AI, predictive analytics, and self-service AI-driven tools will revolutionize how finance teams operate. The next phase of finance transformation will see these technologies embedded more deeply into daily processes, enabling finance functions to evolve from reactive support into proactive strategic partners. Furthermore, sustainable finance, enhanced cybersecurity, and the monetization of AI and data assets will become crucial focus areas.

For businesses, now is the time to invest in talent development, strategic AI roadmaps, and continuous monitoring of AI-driven initiatives. Those that do will gain a competitive edge, unlocking new efficiencies, revenue streams, and long-term value. As finance leaders, the opportunity is clear: embrace AI’s potential to redefine finance as a data-rich, AI-driven powerhouse for the future.



DeepMind unveils Veo 2 model: a new era of video generation?

Just one week after OpenAI released Sora, Google DeepMind has released Veo 2, a video-generation model pushing hard at the current boundaries of AI-powered video creation.

The model is novel in a number of ways – it’s designed to generate high-quality, 1080p resolution videos that can exceed a minute in length, capturing a wide range of cinematic and visual styles.

Key features & example:

Creates realistic video in phenomenal resolution [up to 4K]

Understands a variety of camera shots [drone, wide, close-up, etc.]

Better recreates real-world physics and human expression

Prompt: A low-angle shot captures a flock of pink flamingos gracefully wading in a lush, tranquil lagoon. The vibrant pink of their plumage contrasts beautifully with the verdant green of the surrounding vegetation and the crystal-clear turquoise water. Sunlight glints off the water’s surface, creating shimmering reflections that dance on the flamingos’ feathers. The birds’ elegant, curved necks are submerged as they walk through the shallow water, their movements creating gentle ripples that spread across the lagoon. The composition emphasizes the serenity and natural beauty of the scene, highlighting the delicate balance of the ecosystem and the inherent grace of these magnificent birds. The soft, diffused light of early morning bathes the entire scene in a warm, ethereal glow.

[You can explore more prompt & video-generation examples on the official DeepMind release here].

Veo 2 vs Sora; DeepMind vs OpenAI

Veo 2 and OpenAI’s Sora are both groundbreaking AI video generation models, each with its own strengths.

While Sora excels in creative storytelling and imaginative scenarios, Veo 2 prioritizes realism and adherence to real-world physics. Veo 2 also offers a higher degree of control over the video generation process, allowing users to specify camera angles, lighting, and other cinematic elements.

Google’s direct comparison tests, utilizing 1,003 prompts from Meta’s MovieGenBench dataset and human evaluation of 720p, eight-second video clips, revealed Veo 2’s superiority over competitors like OpenAI’s Sora Turbo.

Limitations

While Veo 2 has made significant strides, Google acknowledges the ongoing challenges in consistently generating realistic and dynamic videos, especially in complex scenes and motion sequences.

To mitigate potential misuse and ensure transparency, Veo 2’s initial rollout will be limited to select products like VideoFX, YouTube, and Vertex AI. In 2025, the model’s reach will expand to platforms like YouTube Shorts. All AI-generated videos will be marked with an invisible SynthID watermark.

Other releases

DeepMind also unveiled an enhanced Imagen 3 model, delivering brighter, better-composed images with richer details and textures. This model also excels in rendering diverse art styles with greater accuracy. It is currently being rolled out globally to ImageFX.

Additionally, Google Labs has introduced a new “Whisk” experiment that leverages the updated Imagen 3 and Gemini’s visual understanding capabilities. This experiment allows users to prompt with images, showcasing the advancements in AI-powered image generation.

OpenAI launches Sora: AI video generator now public

OpenAI has made its artificial intelligence video generator, Sora, available to the general public in the US, following an initial limited release to certain artists, filmmakers, and safety testers.

Introduced in February, the tool faced overwhelming demand on its launch day, temporarily halting new sign-ups due to high website traffic.

Changing video creation with text-to-video generation

The text-to-video generator enables the creation of video clips from written prompts. OpenAI’s website showcases an example: a serene depiction of woolly mammoths traversing a desert landscape.

In a recent blog post, OpenAI expressed its aspiration for Sora to foster innovative creativity and narrative expansion through advanced video storytelling.

The company, also behind the widely used ChatGPT, continues to expand its repertoire in generative AI, including voice cloning and integrating its image generator, Dall-E, with ChatGPT.

Supported by Microsoft, OpenAI is now a leading force in the AI sector, with a valuation nearing $160 billion.

Before public access, technology reviewer Marques Brownlee previewed Sora, finding it simultaneously unsettling and impressive. He noted particular prowess in rendering landscapes despite some inaccuracies in physical representation. Early access filmmakers reported occasional odd visual errors.

What you can expect with Sora

Output options. Generate videos up to 20 seconds long in various aspect ratios. The new ‘Turbo’ model speeds up generation times significantly.

Web platform. Organize and view your creations, explore prompts from other users, and discover featured content for inspiration.

Creative tools. Leverage advanced tools like Remix for scene editing, Storyboard for stitching multiple outputs, Blend, Loop, and Style presets to enhance your creations.

Availability. Sora is now accessible to ChatGPT subscribers. For $200/month, the Pro plan unlocks unlimited generations, higher resolution outputs, and watermark removal.

Content restrictions. OpenAI is limiting uploads involving real people, minors, or copyrighted materials. Initially, only a select group of users will have permission to upload real people as input.

Territorial rollout. Due to regulatory concerns, the rollout will exclude the EU, UK, and other specific regions.

Navigating regulations and controversies

Sora remains restricted in those regions as OpenAI navigates regulatory landscapes, including the UK’s Online Safety Act, the EU’s Digital Services Act, and GDPR.

Controversies have also surfaced, such as a temporary shutdown caused by artists exploiting a loophole to protest against potential negative impacts on their professions. These artists accused OpenAI of glossing over these concerns by leveraging their creativity to enhance the product’s image.

Despite advancements, generative AI technologies like Sora are susceptible to generating erroneous or plagiarized content. This has raised alarms about potential misuse for creating deceptive media, including deepfakes.

OpenAI has committed to taking precautions with Sora, including restrictions on depicting specific individuals and explicit content. These measures aim to mitigate misuse while providing access to subscribers in the US and several other countries, excluding the UK and Europe.

Join us at one of our in-person summits to connect with other AI experts.

Whether you’re based in Europe or North America, you’ll find an event near you to attend.

Register today.

AI Accelerator Institute | Summit calendar
Be part of the AI revolution – join this unmissable community gathering at the only networking, learning, and development conference you need.

How Sudolabs brings practical AI solutions to multiple industries

Can you give us a short intro about Sudolabs? 

Sudolabs started as a digital product agency serving startups, guiding them through idea validation, product discovery, UX/UI design, and software development; we’ve covered the full spectrum.

Currently, we are focusing on enterprise solutions, specializing in AI and Digital Transformation. We help clients identify impactful use cases and ensure the seamless execution of those solutions. 

What inspired Sudolabs to dive into the world of Enterprise AI and Digital Transformation? Can you tell us a bit about the journey so far? 

When we started Sudolabs, we focused on the startup space and had a lot of success there.

Since 2018, we’ve helped start-ups and scale-ups launch over 50 products with a combined valuation of $1.5 billion. These products have raised $200 million across multiple rounds, with backing from top investors like Andreessen Horowitz, SignalFire, and Salesforce Ventures.

Working alongside some of the brightest minds in tech has been incredible.  

The transition to the enterprise segment happened quite naturally. As we grew and brought on some amazing talent, we started working with top Silicon Valley startups, which helped us build a strong reputation. This led to our first enterprise clients from the region reaching out, and we began working on larger, more complex projects.  

Ultimately, we started to explore enterprise verticals in more depth. We delivered multiple projects, and our clients were quite impressed with our iterative process of building digital products. We were used to fast-paced environments and quick iterative cycles when it came to developing products, and our new enterprise clients appreciated that approach.

We also enjoyed the challenges they brought—they pushed us to think bigger and innovate more. Over time, this passion made it clear that diving deeper into Enterprise AI and digital transformation was the right path for us.  

Your transition toward AI and Digital Transformation has surely expanded your playing field — which new verticals have become priorities for you? 

We are, first and foremost, technology pioneers, not confined to any specific vertical. Sudolabs’ expertise lies in understanding emerging technologies, AI trends, and how users interact with these innovations.

Over time, this has naturally led us to build deeper expertise in certain industries; for instance, we have a long track record of working with banking institutions and manufacturing companies. However, our tech expertise is applicable across different segments. 

From finance to manufacturing, how does Sudolabs tailor its approach to meet the unique needs of each industry it serves? 

At Sudolabs, we know that each industry—whether it’s finance, manufacturing, or healthcare—comes with its own unique needs. That’s why we don’t take a one-size-fits-all approach. Instead, we start with a tailored product discovery process that lets us dive deep into each client’s specific challenges and goals before any development begins. 

This discovery phase is crucial because it allows us to understand the ins and outs of their business, identify what’s working well, and spot areas where we can make the biggest impact. For example, in finance, we’re especially mindful of data security and regulatory demands, whereas in manufacturing, we might focus more on optimizing processes through predictive maintenance and AI-powered automation. 

By engaging closely with stakeholders early on, we ensure our solutions are not only technically robust but also genuinely aligned with what matters most to them. This way, we’re creating value that’s practical and meaningful for each industry we serve.

Could you share a behind-the-scenes look at some of the results Sudolabs has achieved for its clients? 

Certainly, we’ve had the chance to work with some amazing clients over the years, but let me share a few of our favorite collaborations. 

For a global steel manufacturer, we developed an AI-driven system to optimize their storage and furnace operations, which reduced their energy and emissions costs by 5%. Hitting ROI within a year, this project has been a great example of how AI can make heavy industry more efficient and sustainable. 

In customer service, we worked with a major US-based outsourcing company, processing over 1.4 million call transcripts to help automate insights on metrics like handle time and customer satisfaction. By implementing advanced language models, we gave them a powerful tool to pull actionable insights, making their decision-making faster and more data-driven across all their call centers. 

For a major European insurance company, we developed a digital tool for better risk prediction and hazard assessment. It automates geospatial data processing and now supports over 1 million policies across 15 countries, helping more than 700 risk engineers focus on higher-value work and enabling faster, more accurate risk evaluations. It’s also been a big time-saver, freeing up over 20% of underwriting agents’ time. 

And for a marketing client with multiple agencies, we created an AI-powered lead-generation tool that personalizes reports for over 100 marketing use cases. This tool has not only streamlined their MQL process but also boosted engagement by matching customers with the right AI solutions, making lead generation much more efficient. 

AI is a very general term – what technologies do you work with, and how do you create success for your clients? 

We have expertise working across all key AI domains, as well as data infrastructure setup. Infrastructure is a very interesting topic: a lot of people do not realize that in order to utilize AI tools, you need to set up the right infrastructure to collect and process your data. Without this, you cannot train your Large Language Models or your Machine Learning Models. 

In terms of specific verticals within what people call AI today, we have experience working with LLMs and Gen AI, traditional Machine Learning and Big Data, and other specialty AI domains, such as Computer Vision. There are multiple use cases for all of the aforementioned technologies. 

Numbers speak louder than words—what are some key metrics that illustrate the transformative power of Sudolabs’s work? 

Each of our clients looks at their own metrics or KPIs; however, ROI is definitely something we take into consideration when we help clients define the right business use cases. For many, the improvements we bring pay off quickly, and hitting that ROI target shows just how impactful our solutions can be. 

For companies new to AI, adapting can be a big leap. How does Sudolabs guide clients through the transition smoothly? 

We start by getting to know their business inside and out. Our team conducts in-depth assessments to understand their objectives, challenges, and existing processes. This helps us identify where AI can add the most value to their operations. Based on our findings, we develop a tailored AI strategy that aligns with their business goals. This roadmap outlines clear steps, timelines, and measurable milestones, ensuring everyone is on the same page. 

Our experts collaborate closely with our clients’ teams during the implementation phase, ensuring knowledge transfer and addressing any concerns promptly. We design AI systems that grow with our clients’ business. Our solutions are scalable and flexible, allowing them to expand AI capabilities as their needs evolve. Post-implementation, we don’t just walk away. We offer continuous support and maintenance services to ensure the newly adopted technology will be successful. On top of that, we prioritize data security and ethical AI use. 

Beyond delivering solutions, how does Sudolabs engage with communities to foster a broader understanding and excitement around AI? 

We’ve always believed that building a strong community is key, especially in the startup world where having a solid network really matters. So, we started hosting events under the SF Founders Collective, which kicked off in San Francisco and pretty quickly spread to other cities like NYC, Austin, and Salt Lake City. 

These events bring together founders and connect them with top VC experts, so they get practical insights that they can actually use. Lately, we’ve been focusing a lot on AI, diving into real-life applications and making the tech feel more accessible. 

For us, it’s not just about delivering solutions; it’s about creating a community that’s genuinely interested in learning and growing together in this space. We’re building places where founders can connect, share ideas, and hopefully get inspired about where AI can take them next.

In an increasingly competitive field, what do you think truly sets Sudolabs apart as a leader in Enterprise AI? 

What sets us apart as a leader in Enterprise AI Transformation is our personalized and practical approach. We focus on truly understanding each client’s unique challenges and goals, and we tailor our AI solutions to meet those specific needs.

Our team combines deep industry knowledge with advanced AI expertise, ensuring that our strategies are both innovative and grounded in real-world applicability. By emphasizing measurable results and fostering close collaboration, we help our clients successfully navigate AI adoption and achieve meaningful business outcomes. 

If you’re interested in exploring tailored solutions for your business, don’t hesitate to reach out to us: https://sudolabs.com/contact.

AI regulation and hallucinations: Lessons from Generative AI London

This article brings together highlights from one of our exclusive events; you can catch all the best moments from our summits OnDemand right here.

The Generative AI Summit in London brought together some of the brightest minds in the industry to explore the challenges, breakthroughs, and potential of this game-changing technology. One standout moment was the panel discussion featuring Tom Mason, Chief Technology Officer at Unlikely AI.

We caught up with Tom after his session to dig deeper into the insights from his panel and hear more about the exciting work Unlikely AI is doing. Here are the key highlights from our conversation. Check out the full interview here:

Just looking for a quick read? We compiled the key highlights from Tom’s expert panel below.

1. The complex world of AI regulation

AI regulation is a hot topic, and Tom’s panel tackled it head-on. He shed light on how different regions—like the US, EU, UK, and China—are approaching regulation and what that means for AI innovation.

Finding common ground: Tom emphasized the importance of building trust between AI creators and users. Regulations that strike the right balance can create a solid foundation for scalable and responsible AI adoption.
Innovation vs. red tape: While strict regulation can hold back innovation, a lack of oversight risks eroding trust. Tom’s takeaway? It’s all about finding that sweet spot.

Every region has its own approach, and while it’s still early days, the discussions happening now are setting the stage for the future of AI.

2. Solving the hallucination problem in generative AI

One of the biggest challenges for generative AI today is managing “hallucination”—when AI confidently generates inaccurate or outright fabricated information. Tom shared how Unlikely AI is tackling this head-on with a unique approach.

A new kind of architecture: Unlikely AI combines symbolic world models with large language models (LLMs) to create what Tom calls a “compound system.” This setup allows for greater control over hallucination, so the AI can switch between creative and hyper-accurate modes depending on the use case.
Grounded in facts: By rooting their models in trusted data sources like Wikipedia, they’re working to ensure accuracy without sacrificing the flexibility generative AI is known for.

Tom explained that this issue is a major barrier to scaling AI solutions in high-trust environments like enterprise applications, but Unlikely AI is on a mission to fix it.

3. Why London is an AI hotspot

As a London-based company, Unlikely AI is putting the city on the generative AI map. Tom highlighted what makes London (and the UK) such a great place for AI innovation.

Pro-innovation vibes: The UK is in a unique position—still light on regulation but heavy on support for experimentation, making it an ideal environment for cutting-edge AI development.
Talent galore: With a deep pool of diverse expertise not just in London but across the UK, it’s the perfect breeding ground for cross-functional, collaborative teams.

It’s clear that London isn’t just keeping up with global AI hubs—it’s carving out a space at the forefront.

4. What’s next for Unlikely AI?

Unlikely AI is gearing up for some big moves, and Tom gave us a sneak peek at what’s coming in the next year:

Early-stage validation: In Q1 2025, the team plans to roll out initial Proof-of-Concepts (PoCs) with companies in different sectors to test their technology in high-throughput environments.
Full launch: By mid-2025, Unlikely AI is set to bring its solutions to market, offering businesses a way to deploy generative AI with accuracy and confidence.

If your business is looking to leverage the next generation of generative AI, Tom is more than open to having a conversation.

Why community events like this matter

Wrapping things up, Tom spoke passionately about the role of community in driving AI forward. Events like the Generative AI Summit bring together builders, engineers, and executives to share ideas, build connections, and foster innovation.

“It’s all about bringing different skill sets together,” Tom said. “Communities like AIAI are doing an incredible job at creating the spaces we need to collaborate and push the industry forward.”

Improving AI inference performance with hardware accelerators

As artificial intelligence (AI) continues to permeate various industries, the demand for efficient and powerful AI inference has surged. AI inference, the process of running trained machine learning models to make predictions or decisions, is computationally intensive and often constrained by the performance of the underlying hardware.

Enter hardware accelerators[1]—specialized hardware designed to optimize AI inference, providing significant improvements in flexibility, performance, and iteration time. 

With the growing demand for AI applications across industries, achieving real-time inference performance is crucial.

Hardware accelerators, such as GPUs (Graphics Processing Units), NPUs (Neural Processing Units), FPGAs (Field-Programmable Gate Arrays), and ASICs (Application-Specific Integrated Circuits) play a significant role in enhancing AI inference performance by providing optimized computational power and parallelism.

This article explores the different types of hardware accelerators, their architecture, how they can be leveraged to improve AI inference performance, and the impact they have on modern AI applications.

AI inference challenges

AI inference typically involves performing a large number of mathematical operations, such as matrix multiplications, which are computationally intensive.

Traditional CPUs, although powerful, are not optimized for these specific types of workloads, leading to inefficiencies in power consumption and speed. As AI models become more complex and data sets larger, the need for specialized hardware to accelerate inference has become apparent.

In AI inference, the balance between compute power and memory bandwidth is critical for optimal performance. Compute power refers to the processing capability of the hardware, which handles the mathematical operations required by the AI model; high compute power allows for faster processing of complex models. Memory bandwidth is the speed at which data can be transferred between memory and the processing units.

Meanwhile, the computational requirements for training state-of-the-art Convolutional Neural Networks (CNNs) and Transformer models have been growing exponentially.

This trend has fueled the development of AI accelerators designed to boost the peak computational power of hardware. These accelerators are also being developed to address the diverse memory and bandwidth bottlenecks associated with AI workloads, particularly in light of the fact that DRAM memory scaling is lagging behind advancements in compute power as shown in Fig 1. 

Fig 1. Evolution of the number of parameters in CNN/Transformer models vs. single-GPU memory [6]

Fig 2. Compute (FLOPs) vs Memory Bandwidth (per inference) for different CNN architectures [7]

Figs 2 and 3 show the compute vs. memory bandwidth characteristics of popular AI models [6, 7]. Even with high compute power, if memory bandwidth is insufficient, the processors may spend time waiting for data, leading to underutilization of compute resources. Ensuring that memory bandwidth matches the compute demands of the AI model is essential for avoiding bottlenecks and maximizing inference performance.

Fig 3. Compute (FLOPs) vs Arithmetic Intensity (MOPs) for different Transformer (LLM) models [6]
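
To make the compute-versus-bandwidth trade-off concrete, the sketch below runs a back-of-the-envelope roofline check for a single dense layer at batch size 1. The peak-FLOPs and bandwidth figures are illustrative assumptions, not vendor specifications; the point is simply to compare a layer’s arithmetic intensity against the hardware’s ridge point.

```python
# Back-of-the-envelope roofline check for one fp16 dense layer at batch size 1.
# The hardware numbers below are illustrative assumptions, not vendor specs.
PEAK_FLOPS = 100e12   # assumed 100 TFLOP/s of peak compute
MEM_BW     = 1.0e12   # assumed 1 TB/s of memory bandwidth

M, K, N = 1, 4096, 4096                     # batch-1 activation (M x K) times weights (K x N)
flops = 2 * M * K * N                       # one multiply-accumulate = 2 FLOPs
bytes_moved = 2 * (M * K + K * N + M * N)   # fp16 = 2 bytes per element (inputs, weights, outputs)

intensity = flops / bytes_moved             # FLOPs per byte of memory traffic
ridge = PEAK_FLOPS / MEM_BW                 # intensity at which compute and bandwidth balance

print(f"arithmetic intensity: {intensity:.2f} FLOP/byte, ridge point: {ridge:.1f} FLOP/byte")
print("memory-bound" if intensity < ridge else "compute-bound")
```

With these assumed numbers the layer lands far below the ridge point, which is why small-batch inference workloads are so often bandwidth-limited rather than compute-limited.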

Hardware accelerators

Hardware accelerators, such as GPUs, NPUs, FPGAs, and ASICs, offer a range of deployment options that cater to diverse AI applications. These accelerators can be deployed on-premises, in data centers, or at the edge, providing flexibility to meet specific needs and constraints.

The primary advantage of hardware accelerators is their ability to significantly boost computational performance. GPUs, with their parallel processing capabilities, excel at handling the massive matrix operations typical in AI inference. This parallelism allows for faster processing of large datasets and complex models, reducing the time required to generate predictions.

NPUs, specifically designed for AI workloads, offer even greater performance improvements for certain deep learning tasks. By optimizing the hardware for matrix multiplications and convolutions, NPUs can deliver superior throughput and efficiency compared to general-purpose processors. The architecture of hardware accelerators plays a crucial role in their ability to enhance AI inference performance.

Below, we outline the key architectural features of GPUs, NPUs, FPGAs, and ASICs.

Fig 4. Overview of Hardware architecture for Hardware Accelerators

Graphics Processing Units (GPUs)

GPUs are widely used for AI workloads due to their ability to perform parallel computations efficiently. Unlike CPUs, which are optimized for sequential tasks, GPUs can handle thousands of parallel threads, making them ideal for the matrix and vector operations common in deep learning.

The GPU architecture is designed with thousands of compute units along with scratch memory and control units, enabling highly parallel data processing. Modern GPUs, such as NVIDIA’s A100, are specifically designed for AI workloads, offering features like tensor cores that provide further acceleration for AI operations.

The architecture of a GPU consists of multiple cores, each capable of executing thousands of threads simultaneously. Modern GPUs include specialized cores, such as tensor cores, which are designed specifically for deep learning operations. The memory bandwidth and large register files of GPUs enable efficient handling of large datasets.
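
As a rough illustration of how those specialized cores are reached in practice, the hedged sketch below runs a matrix multiply under autocast; PyTorch is assumed here purely as an example framework, and on GPUs with tensor cores a reduced-precision matmul like this is typically dispatched to tensor-core kernels.

```python
import torch

# Minimal sketch: a large matmul under autocast. On tensor-core GPUs this
# typically runs in fp16 on the tensor cores; on CPU it falls back to bf16.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.bfloat16

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

with torch.autocast(device_type=device, dtype=dtype):
    c = a @ b   # dispatched to reduced-precision kernels when available

print(c.dtype, c.shape)
```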

Neural Processing Units (NPUs)

NPUs are custom accelerators designed specifically for neural network processing. NPUs are optimized for inference tasks, and they excel at handling large-scale AI models.

The architecture of NPUs contains multiple compute units that allow them to perform matrix multiplications and convolutions more efficiently than GPUs, particularly for models like convolutional neural networks (CNNs).

NPUs also include on-chip memory to reduce data transfer times and increase throughput, and their array architecture is particularly effective for CNNs and other deep learning models.

Field-Programmable Gate Arrays (FPGAs)

FPGAs offer a unique advantage due to their reconfigurability. They contain millions of programmable logic gates that can be configured to accelerate specific tasks, such as AI inference, by tailoring the hardware to the specific needs of the application.

This makes FPGAs highly efficient for AI workloads, especially in scenarios where low latency is critical, such as in real-time systems. Companies like Xilinx and Intel offer FPGAs that can be configured to accelerate AI inference.

FPGAs are composed of a grid of configurable logic blocks connected by programmable interconnects. The flexibility of FPGAs allows them to be customized for specific AI workloads, optimizing both performance and power consumption. The ability to reprogram the logic blocks enables FPGAs to adapt to different neural network models as needed.

Application-Specific Integrated Circuits (ASICs)

ASICs are custom-designed chips optimized for a specific application or task. In the context of AI, ASICs are designed to accelerate specific neural network models. An example is Google’s Edge TPU, which is designed for fast and efficient AI inference on edge devices.

The main advantage of ASICs is their efficiency in terms of both power consumption and performance. ASICs are highly optimized for specific tasks, with a fixed architecture designed to maximize efficiency for those tasks.

In the case of AI inference, ASICs are designed to execute specific neural network models with minimal power consumption and maximum speed. This fixed architecture, while highly efficient, lacks the flexibility of FPGAs.

Optimization techniques

To fully leverage the capabilities of various hardware accelerators, different optimization techniques can be applied, each tailored to the strengths of specific hardware types:

Network Architecture Search (NAS): NAS is particularly valuable for customizing neural network architectures to suit specific hardware accelerators. For edge devices, NAS can craft lightweight models that minimize parameters while maximizing performance.

This is especially crucial for NPUs and ASICs, where designing architectures that efficiently utilize hardware resources is essential for optimizing performance and energy efficiency.
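
As a toy illustration of the idea (not a production NAS system), the sketch below runs a latency-constrained random search over depth and width. The cost model and accuracy proxy are hypothetical placeholders; in practice they would be replaced by on-device latency measurements and real training or evaluation of each candidate.

```python
import random

def estimated_latency_ms(depth: int, width: int) -> float:
    # Hypothetical cost model; in practice, measure on the target NPU/ASIC.
    return 0.02 * depth * width / 64

def estimated_accuracy(depth: int, width: int) -> float:
    # Hypothetical proxy score; in practice, train/evaluate each candidate.
    return 0.70 + 0.002 * depth + 0.0005 * width

LATENCY_BUDGET_MS = 2.0   # assumed edge-device latency budget
best = None

for _ in range(200):
    depth = random.choice([4, 8, 12, 16])
    width = random.choice([64, 128, 256, 512])
    if estimated_latency_ms(depth, width) > LATENCY_BUDGET_MS:
        continue  # reject candidates that miss the hardware latency budget
    score = estimated_accuracy(depth, width)
    if best is None or score > best[0]:
        best = (score, depth, width)

print("best (proxy accuracy, depth, width) under budget:", best)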

Quantization: Quantization involves reducing the precision of a model’s weights and activations, typically from floating-point to fixed-point representations. This technique is highly effective on NPUs, ASICs, and FPGAs, where lower precision computations can drastically improve inference speed and reduce power consumption.

GPUs also benefit from quantization, though the gains may be less pronounced compared to specialized hardware like NPUs and ASICs.
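
The sketch below shows the core idea with symmetric per-tensor int8 quantization in NumPy. Real toolchains add per-channel scales, calibration data, and often quantization-aware training, so treat this as a conceptual example only.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    # Symmetric per-tensor quantization: map float32 weights onto [-127, 127]
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
print("int8 storage is 4x smaller; max abs error:",
      np.abs(w - dequantize(q, scale)).max())
```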

Pruning: Pruning reduces the number of unnecessary weights in a neural network, thereby decreasing the computational load and enabling faster inference. This technique is particularly effective for FPGAs and ASICs, which benefit from reduced model complexity due to their fixed or reconfigurable resources.

Pruning can also be applied to GPUs and NPUs, but the impact is most significant in environments where hardware resources are tightly constrained.
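
A minimal sketch of magnitude pruning follows, assuming a plain NumPy weight matrix. Production flows typically prune iteratively, fine-tune afterwards, and use structured sparsity patterns that the target hardware can actually exploit.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float):
    # Zero out the smallest-magnitude weights until the target sparsity is reached
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w), axis=None)[k]
    mask = (np.abs(w) >= threshold).astype(w.dtype)
    return w * mask, mask

w = np.random.randn(512, 512).astype(np.float32)
pruned, mask = magnitude_prune(w, sparsity=0.8)
print("achieved sparsity:", 1.0 - mask.mean())
```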

Kernel fusion: Kernel fusion combines multiple operations into a single computational kernel, reducing the overhead of memory access and improving computational efficiency.

This optimization is especially beneficial for GPUs and NPUs, where reducing the number of memory-bound operations can lead to significant performance improvements. Kernel fusion is less applicable to FPGAs and ASICs, where operations are often already highly optimized and customized.
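
To show what fusion buys, the sketch below defines an unfused chain of elementwise ops and then compiles it. Assuming PyTorch 2.x, `torch.compile` will typically fuse the multiply, add, and ReLU into a single kernel, so the tensor is read and written once instead of three times.

```python
import torch

def unfused(x, w, b):
    # Eager mode launches three separate elementwise kernels,
    # each making a full pass over the tensor in memory.
    t = x * w
    t = t + b
    return torch.relu(t)

# torch.compile (PyTorch 2.x) traces the function and typically fuses the
# elementwise chain into one kernel, cutting memory traffic and launch overhead.
fused = torch.compile(unfused)

x, w, b = (torch.randn(4096, 4096) for _ in range(3))
assert torch.allclose(unfused(x, w, b), fused(x, w, b), atol=1e-6)
```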

Memory optimization: Optimizing memory access patterns and minimizing memory footprint are critical for maximizing the available bandwidth on hardware accelerators.

For GPUs, efficient memory management is key to improving throughput, particularly in large-scale models. NPUs also benefit from memory optimization, as it allows for more efficient execution of neural networks. FPGAs and ASICs, with their specialized memory hierarchies, require careful memory planning to ensure that data is efficiently accessed and processed, thereby enhancing overall inference performance.
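
Two common host-side techniques are sketched below, again assuming PyTorch as an illustrative framework: avoiding temporaries with in-place ops to shrink the memory footprint, and using pinned host memory with non-blocking copies so data transfers can overlap with accelerator compute.

```python
import torch

x = torch.randn(1024, 1024)

def wasteful(t):
    # Each op materializes a new temporary tensor (extra allocations and passes)
    return ((t * 2.0) + 1.0).relu()

def frugal(t):
    # In-place ops reuse the same buffer, reducing peak memory and traffic
    return t.mul_(2.0).add_(1.0).relu_()

assert torch.allclose(wasteful(x.clone()), frugal(x.clone()))

# Pinned (page-locked) host memory enables asynchronous host-to-device copies
# that can overlap with computation already queued on the GPU.
if torch.cuda.is_available():
    batch = torch.randn(64, 3, 224, 224).pin_memory()
    device_batch = batch.to("cuda", non_blocking=True)
```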

AI model deployment challenges

Deploying AI models on hardware accelerators presents several challenges, particularly in terms of flexibility, iteration time, and performance. Each type of accelerator—GPUs, NPUs, FPGAs, and ASICs—poses unique considerations in these areas.

Fig 5. Trade-offs of AI model deployment on different accelerators

Flexibility: Flexibility is vital for supporting the latest AI models and adapting to evolving frameworks. GPUs, with their general-purpose architecture, offer the highest flexibility among accelerators, making them well-suited for rapidly integrating new models and frameworks.

NPUs, while more specialized, also provide a good balance of flexibility, particularly for neural network tasks, though they may require some adjustments for new operations or model types.

FPGAs are reconfigurable, allowing for custom adjustments to support new models, but this reconfiguration can be complex and time-consuming. ASICs, being custom-designed for specific tasks, offer the least flexibility; any change in model architecture or framework may require a new chip design, which is costly and time-intensive.

The challenge, therefore, lies in ensuring that the deployment environment can integrate advancements without extensive reconfiguration, especially in less flexible hardware like ASICs and FPGAs.

Iteration time: Iteration time, or the speed at which improved AI models can be deployed, is crucial for maintaining the effectiveness of AI systems. GPUs excel in this area due to their compatibility with a wide range of development tools and frameworks, allowing for faster model optimization and deployment.

NPUs also support relatively quick iteration times, especially when deploying models tailored for neural network tasks. However, the application of optimization techniques like quantization and pruning can add complexity, requiring thorough validation to ensure that the model meets performance and key performance indicators (KPIs) post-deployment.

FPGAs, though powerful, often have longer iteration times due to the need for reconfiguration and hardware-specific optimization. ASICs present the greatest challenge in iteration time, as any update or improvement to the model could necessitate redesigning the hardware, which is a slow and expensive process.

Performance: Performance is a key concern when deploying AI models on hardware accelerators. For GPUs, achieving optimal performance involves maximizing hardware resource utilization and efficiently scaling across multiple units, which can be managed relatively easily due to the mature ecosystem of tools available.

NPUs, designed specifically for AI workloads, generally achieve high performance with low latency and high throughput but may require fine-tuning to fully leverage their capabilities. FPGAs, with their customizability, can achieve exceptional performance for specific tasks but often require manual tuning, including custom kernel development and modifications to fully optimize the model.

ASICs deliver the best performance per watt for specific tasks due to their tailored design, but achieving this performance involves significant upfront design work, and any deviation from the initial model can severely impact performance.

These challenges underscore the importance of a carefully considered deployment strategy tailored to the specific hardware accelerator being used. By understanding and addressing the unique flexibility, iteration time, and performance challenges of GPUs, NPUs, FPGAs, and ASICs, organizations can fully leverage the potential of hardware accelerators for AI model deployment.

Performance comparison

When evaluating the performance of various hardware accelerators, it is crucial to consider several key factors, including throughput, latency, power consumption, scalability, and cost. Below is a summary of these performance metrics for GPUs, NPUs, FPGAs, and ASICs.

Throughput: GPUs are known for their high throughput, making them ideal for large-scale AI models and batch-processing tasks. NPUs, designed specifically for AI workloads, also offer high throughput but are optimized for neural network processing. FPGAs and ASICs, while capable of high throughput, are typically employed in scenarios where low latency is more critical than raw throughput.

Latency: FPGAs and ASICs generally offer lower latency compared to GPUs and NPUs, making them well-suited for real-time applications. FPGAs are particularly valuable because of their reconfigurability, allowing them to be tailored for low-latency inference tasks. ASICs, being custom-designed for specific tasks, are also optimized for minimal latency.

Power Consumption: In terms of energy efficiency, ASICs are the most power-efficient due to their specialized design. NPUs, which are also designed for AI tasks, offer better energy efficiency compared to general-purpose GPUs.

FPGAs tend to consume more power than ASICs but are generally more power-efficient than GPUs, especially when configured for specific tasks. GPUs, while offering high performance, are typically less power-efficient, but their use can be justified in scenarios where their computational power is necessary.

Scalability: All four types of accelerators offer scalability, but the approaches differ. GPUs are widely used in data centers, where multiple units can be deployed in parallel to manage large-scale AI workloads. NPUs, with their specialized architecture, also scale well in distributed AI environments.

FPGAs provide flexibility and can be reconfigured to scale with the workload, while ASICs, though less flexible, offer scalable solutions when deployed in specific applications. Cloud providers often offer accelerator instances, allowing organizations to dynamically scale their AI infrastructure according to workload requirements.

Cost: ASICs are the most expensive to design and manufacture due to their custom nature, which requires significant upfront investment. FPGAs are more cost-effective for applications that require flexibility and reconfigurability.

GPUs, being general-purpose processors, are typically more affordable for a wide range of AI workloads, making them a popular choice for many applications. NPUs, though specialized, generally fall between GPUs and ASICs in terms of cost, offering a balance of efficiency and affordability depending on the use case.

Future trends

The future of AI inference hardware accelerators is poised for significant advancements, driven by the need for more specialized, efficient, and scalable architectures. Several emerging trends are shaping the development of next-generation hardware accelerators:

Heterogeneous computing: The future of AI hardware will likely involve a heterogeneous approach, combining multiple types of processors—such as CPUs, GPUs, NPUs, FPGAs, and ASICs—into a single system to leverage the strengths of each.

This approach allows for the dynamic allocation of workloads to the most appropriate hardware, optimizing performance, power consumption, and efficiency. Heterogeneous computing architectures are expected to become more prevalent as AI models continue to grow in complexity, requiring diverse hardware capabilities to meet different computational demands.

Innovations in software frameworks and tools will be critical to managing these complex systems and ensuring seamless integration between different types of accelerators.

Neuromorphic computing is an innovative approach inspired by the human brain’s architecture. Neuromorphic chips are designed to mimic the structure and function of biological neural networks, enabling AI inference with remarkably low power consumption and high efficiency.

These chips use spiking neural networks (SNNs), which process information in a way that resembles how neurons communicate in the brain—through spikes of electrical activity.

This approach can dramatically reduce energy usage compared to traditional digital processors, making neuromorphic chips ideal for battery-powered devices and other energy-constrained environments. Companies like Intel (with its Loihi chip) and IBM (with its TrueNorth chip) are leading the development of neuromorphic computing, aiming to bring brain-inspired computing closer to practical applications.

3D chip stacking, also known as 3D integration, is an emerging technology that involves stacking multiple layers of semiconductor chips vertically to create a single, densely packed unit.

This technique allows for greater integration of processing, memory, and communication resources, leading to significant improvements in performance, power efficiency, and form factor.

By reducing the distance that data needs to travel between different parts of the chip, 3D stacking can greatly reduce latency and increase bandwidth, making it a promising solution for AI inference tasks that require high throughput and low latency. The technology also enables more compact designs, which are essential for advanced AI applications in portable devices and edge computing.

Edge AI refers to the deployment of AI models directly on devices at the edge of the network, rather than relying on centralized cloud computing. As the demand for real-time processing in IoT devices, autonomous vehicles, and mobile applications continues to grow, edge AI is becoming increasingly important.

Specialized accelerators like Google’s Edge TPU are designed specifically for low-power AI inference on edge devices, enabling fast and efficient processing close to where data is generated.

These accelerators are optimized for tasks such as image recognition, natural language processing, and sensor data analysis, allowing for real-time AI applications without the need for constant connectivity to the cloud. The growth of edge AI is also driving innovations in energy-efficient hardware design, making it possible to deploy powerful AI capabilities in small, power-constrained devices.

Quantum computing for AI: Although still in its early stages, quantum computing holds the potential to revolutionize AI inference by leveraging quantum mechanics to perform computations at unprecedented speeds.

Quantum computers could solve certain types of problems much faster than classical computers, including those involving optimization, search, and sampling, which are common in AI.

While quantum hardware is not yet ready for widespread AI inference tasks, ongoing research and development suggest that quantum accelerators could eventually complement traditional hardware by handling specific, highly complex AI computations that are beyond the reach of current digital systems.

These trends indicate that the future of AI inference hardware will be marked by increasingly specialized and efficient architectures, tailored to meet the growing demands of AI applications across various domains.

By embracing these emerging technologies, the industry will be able to push the boundaries of what is possible in AI, driving new innovations and unlocking new possibilities for real-time, energy-efficient AI processing.

Conclusion

Hardware accelerators are revolutionizing AI inference by enhancing flexibility, performance, and iteration time. Their versatile deployment options, adaptability to different workloads, and future-proofing capabilities make them indispensable in modern AI infrastructure.

By delivering accelerated computation, improved energy efficiency, and scalability, hardware accelerators ensure that AI applications can meet the demands of today’s data-intensive and real-time environments. Furthermore, by reducing iteration time, they enable faster model development, real-time inference, and rapid prototyping, driving innovation and competitiveness in the AI landscape.

As AI continues to evolve, the role of hardware accelerators will only become more pivotal, unlocking new possibilities and transforming industries across the board. Hardware accelerators are essential for improving AI inference performance, enabling faster and more efficient processing of complex models.

By understanding the capabilities and limitations of different types of accelerators, such as GPUs, NPUs, FPGAs, and ASICs, developers can choose the right hardware for their specific AI applications. As the field continues to evolve, we can expect to see further innovations in accelerator technology, driving the next wave of AI advancements.

References

[1] Mittal, S. (2020). “A Survey on Accelerator Architectures for Deep Neural Networks.” Journal of Systems Architecture, 99, 101635. DOI: 10.1016/j.sysarc.2019.101635.

[2] Li, C., et al. (2016). “FPGA Acceleration of Recurrent Neural Network Based Language Model.” ACM Transactions on Reconfigurable Technology and Systems (TRETS).

[3] NVIDIA Corporation. (2020). “NVIDIA A100 Tensor Core GPU Architecture.”

[4] Xilinx Inc. (2019). “Versal AI Core Series: Datasheet.”

[5] Sze, V., et al. (2017). “Efficient Processing of Deep Neural Networks: A Tutorial and Survey.” Proceedings of the IEEE.

[6] Gholami, A., Yao, Z., Kim, S., Hooper, C., Mahoney, M. W., and Keutzer, K. “AI and Memory Wall.” IEEE Micro, pp. 1–5, 2024.

[7] Dwith Chenna, “Evolution of Convolutional Neural Network (CNN): Compute vs Memory Bandwidth for Edge AI.” IEEE FeedForward Magazine 2(3), 2023, pp. 3–13.

Wondering what aspects of hardware are more important for computer vision?

Have a read below:

5 components of computer vision hardware you need to know
In this article, we cover a few components of hardware you need to know to work with computer vision.

What does the EU competitiveness report say about developments in AI and Competition Law?

Artificial Intelligence (AI) and the Law sit at a critical intersection as regulators seek to identify the most appropriate long-term solution to a technology bound by the pace problem.

The European Union (EU) Competitiveness Report published back in September 2024 [1] highlights considerations the EU bloc should look at in their forthcoming budget to ensure that excessive regulation does not impede an Artificial Intelligence-driven future.

What is the EU Competitiveness Report?

The EU Competitiveness Report was published by Mario Draghi on 9 September 2024 [1], outlining how stagnant economic growth and excessive red tape could threaten innovation, Europe’s prosperity, and social welfare.

Draghi’s report recommends sectoral and horizontal policies to ensure the bloc is competitive in the future alongside the United States and China. To achieve this, an injection of €750-800 billion [1] from a mixture of public and private investment is recommended (or 5 percent of the EU’s total Gross Domestic Product (GDP)), with €450 billion [1] allocated to the energy transition. In addition to the required investment, the report also recommends reforms in Competition Law to allow for mergers of European corporations, especially on the back of the EU’s decision to block the merger between Siemens and Alstom back in 2019 [2].

How the recommendations in the report will be actioned in the long run will not be determined solely by Draghi’s presentation to the informal European Council; they will equally be tested when President-elect Donald Trump is sworn into office on 20 January 2025. Additionally, the negotiations on the upcoming multi-annual financial framework (MFF) that will shape the EU budget for the period 2028–2034 will be a further hurdle in determining the actionability of the report, with a first draft expected in 2025. The foundational negotiations and, ultimately, the budget size and expenditure will determine whether the report has laid the foundations for a more ambitious EU.

Competition Law as a tool for promoting AI innovation in the USA
USA leads in AI with the National AI Initiative Act and AI Bill of Rights, ensuring secure and ethical development.

Technology innovation in Europe

With increasing global pressure to dominate the AI landscape whilst simultaneously working to improve understanding of AI ethics, there is a pressing need for increased investment in Research and Development (R&D) to cope with the heightened computational demand AI is bringing: an area in which Europe is falling behind.

Generally speaking, the EU’s industrial model is highly diversified when it comes to technology: it is more specialized in established technologies but weaker in both software and computer services. Taking R&D expenditure compared to the market leaders in software and internet, for example, EU firms represent only 7%, compared to 71% for the US and 15% for China [1]. In technology hardware and equipment, again, the EU trails, accounting for only 12% of R&D expenditure compared to 40% in the US and 19% in China [1]. 

Despite Europe lagging behind in R&D, the bloc, on a positive note, has a stronghold in high-performance computing (HPC). 2018 saw the launch of the EuroHPC Joint Undertaking, where the creation of large public infrastructure across six member states allowed for an increase in computing capacity [1]. Additionally, with plans to launch two exascale computers in the future, these new systems, alongside the AI Innovation package [3], will open up HPC capacity to AI startups, an important step in helping companies scale their AI systems.

Among the innovation agenda outlined above, core legislation surrounding the EU’s digital model is an important pillar to ensure fairness and contestability within the digital sector. The Digital Markets Act [4], for example, sets out obligations gatekeepers – large digital platforms providing core platform services such as search engines – must follow. As Frontier AI continues to develop, there is bound to be increased friction between the EU and US companies as they more deeply embed Artificial Intelligence into their software with the aim of marketing it to as many consumers as possible.

What does the roadmap say about AI?

According to the Competitiveness report, only 11% of EU companies are adopting AI (compared to a target of 75% by 2030) [1], and when it comes to foundation models developed since 2017, 73% of these are from the US and 15% are from China [1].

One reason why Europe lacks competitiveness in this space is that it lacks both the availability of venture capital and the cloud hyperscalers the US has, for example, through partnerships such as OpenAI and Microsoft. In 2023, only $8 billion in venture capital was invested in the EU, compared to $68 billion in the USA and $15 billion in China [1]. Combine this with the fact that Mistral and Aleph Alpha – two European companies building Generative AI models – require significant investment to remain competitive, and they have little choice but to opt for funding from overseas.

Back in March 2024, the EU’s AI Act [5] was passed: regulation that categorises AI systems into differing risk levels, largely regardless of context. The Act’s enforcement and impact, however, are unlikely to be seen until 2026, when the transition period ends and provisions for high-risk systems come into effect. With AI embedded cross-platform, managing competition across the smaller and larger players will be a delicate balancing act, and with AI showing no signs of slowing down, the future of AI and its associated regulation will bring a combination of excitement and controversy.  

Regulating artificial intelligence: The bigger picture
The article discusses the challenges of AI regulation, economic impact, and governance, with a focus on the UK’s evolving legal approach.

What does it mean for AI and associated Law in the future?

Addressing the challenges around the availability of R&D funding will be one of numerous steps to ensuring the EU bloc is competitive in the AI space. However, with competition from the USA and China only set to strengthen, funding alone will not be enough; competition law reform will also be required.

High-inflation environments could give rise to tacit collusion – a form of collusive behaviour in which firms coordinate their actions without explicitly reaching an agreement – a gap that is not easily closed, especially as there are currently no effective EU-level tools to deal with the practice.

Furthermore, consumer inertia resulting from brand loyalty, switching costs, and habit formation may result in undisciplined competition, even where a more cost-effective and technologically streamlined option exists. A competition enforcer wants to ensure consumers are not exploited, but at the same time, similar to tackling the high-inflation aspect, there is no specific tool for them to use.

While the EU’s AI Act [5] is a commendable step in managing what is a continually difficult technology to contend with, Europe lagging behind holistically in AI development may result in EU companies having their market share eroded by their non-EU counterparts. Competition that benefits EU consumers often comes from trade in regional and global markets, so if competition reform is on the table to boost enterprise in the bloc, minimising anticompetitive behaviour and the harm from illegal subsidies to foreign firms must be looked at closely.

Bibliography

[1] Mario Draghi, The Future of European Competitiveness, Part B: In-depth Analysis and Recommendations, The European Commission, 2024.

[2] Siemens/Alstom (Case IV/M.8677) Commission Decision 139/2004/EEC [2019] OJ C 300/07.

[3] Joint Research Centre, Science for Policy Brief, Harmonised Standards for the European AI Act, JRC 139430.

[4] Regulation (EU) 2022/1925 of the European Parliament and of the Council on contestable and fair markets in the digital sector [2022] OJ L265/1 (hereafter: DMA).

[5] European Commission (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. COM/2021/206 final. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

Here at AI Accelerator Institute, we’re always looking to bring you more exciting events each year.

And 2025 is no different; why not take a look at what we have planned for you?

AI Accelerator Institute | Summit calendar
Be part of the AI revolution – join this unmissable community gathering at the only networking, learning, and development conference you need.