ماه: مهر 1403

UK Government prioritizes AI for economic growth and servicesUK Government prioritizes AI for economic growth and services

The UK government is significantly pushing to harness the power of artificial intelligence (AI) as a central tool for economic growth and improved public services. Newly appointed Science Secretary Peter Kyle has declared AI a top priority, aiming to leverage its potential to drive change nationwide.

On July 26th, Science Secretary Peter Kyle appointed Matt Clifford, a prominent tech entrepreneur and Chair of the Advanced Research and Invention Agency (ARIA), to spearhead the government’s AI initiatives. Clifford’s primary task will be to develop a comprehensive AI Opportunities Action Plan. This plan explores how AI can enhance public services, drive economic growth, and position the UK as a leader in the global AI sector.

“We’re putting AI at the heart of the government’s agenda to boost growth and improve our public services.”
— Science Secretary Peter Kyle

Developing a competitive UK AI sector

A key focus of the AI Opportunities Action Plan will be building a robust UK AI sector that can scale and compete internationally. The plan will outline strategies to accelerate AI adoption across various sectors of the economy, ensuring that the necessary infrastructure, talent, and data access are in place to support widespread implementation.

The Action Plan is expected to be a critical driver of productivity and economic growth in the UK. According to estimates from the International Monetary Fund (IMF), the widespread adoption of AI could potentially boost UK productivity by up to 1.5 percent annually. While the timeline for realizing these gains may be gradual, the long-term benefits are substantial.

To support the implementation of the Action Plan, the Department for Science, Innovation and Technology (DSIT) will establish a new AI Opportunities Unit. This unit will bring together expertise from across government and industry to maximize AI’s benefits, ensuring that the UK can fully capitalize on this transformative technology.

Government and industry collaboration

Developing the AI Opportunities Action Plan will involve close collaboration with key figures from industry and civil society. The plan will also consider the UK’s infrastructure needs by 2030, including the availability of computing resources for startups and the development of AI talent in both the public and private sectors.

Science Secretary Peter Kyle emphasized the importance of AI in the government’s agenda, stating, “We’re putting AI at the heart of the government’s agenda to boost growth and improve our public services.” He expressed confidence in Matt Clifford’s ability to drive this initiative forward, highlighting Clifford’s extensive experience and shared vision for the future of AI in the UK.

Chancellor of the Exchequer Rachel Reeves also underscored AI’s economic potential, noting that it could help create good jobs across the country, deliver improved public services, and reduce taxpayer costs.



Matt Clifford’s vision for the future

Matt Clifford expressed enthusiasm for his new role, stating, “AI presents us with so many opportunities to grow the economy and improve people’s lives. The UK is leading the way in many areas, but we can do even better.” He is set to deliver his recommendations to the Science Secretary in September, marking a significant step forward in the UK’s AI journey.

As the UK government places AI at the forefront of its agenda, the newly launched initiatives and the forthcoming AI Opportunities Action Plan are poised to play a pivotal role in shaping the future of the nation’s economy and public services. With strong leadership and collaboration between government, industry, and civil society, the UK is positioned to harness the full potential of AI, driving sustained economic growth and improving the lives of its citizens.

Read about how Singapore is approaching GenAI governance below:

Singapore’s Draft Framework for GenAI Governance
Explore Singapore’s new draft governance framework for GenAI, addressing emerging challenges in AI usage, content provenance, and security.

Superintelligent language models: A new era of artificial cognitionSuperintelligent language models: A new era of artificial cognition

As the field of artificial intelligence continues to push the boundaries of what’s possible, one development has captivated the world’s attention like no other: the meteoric rise of large language models (LLMs).

These AI systems, trained on vast troves of textual data, are not only demonstrating remarkable capabilities in natural language processing and generation, but they are also beginning to exhibit signs of something far more profound—the emergence of artificial general intelligence (AGI).

The pursuit of AGI: From dream to reality

Artificial General Intelligence (AGI), also known as “strong AI” or “human-level AI,” refers to the hypothetical development of AI systems that can match or exceed human-level intelligence across a broad range of cognitive tasks and domains. The idea of AGI has been a longstanding goal and subject of intense interest and speculation within the field of artificial intelligence.

The roots of AGI can be traced back to the early days of AI research in the 1950s and 1960s. During this period, pioneering scientists and thinkers, such as Alan Turing, John McCarthy, and Marvin Minsky, envisioned the possibility of creating machines that could think and reason in a general, flexible manner, much like the human mind. However, the path to AGI has proven to be far more challenging than initially anticipated.

For decades, AI research focused primarily on “narrow AI” – systems that excelled at specific, well-defined tasks, such as chess playing, language translation, or image recognition. These systems were highly specialized and lacked the broad, adaptable intelligence that characterizes human cognition.



The breakthrough of LLMs: A step toward AGI

The breakthrough that has reignited the pursuit of AGI is the rapid advancements in large language models (LLMs), such as GPT-3, DALL-E, and ChatGPT. These models, trained on vast troves of textual data, have demonstrated an unprecedented ability to engage in natural language processing, generation, and even reasoning in ways that resemble human-like intelligence.

As these LLMs have grown in scale and complexity, researchers have begun to observe the emergence of “superintelligent” capabilities that go beyond their original training objectives. These include the ability to:

Engage in multifaceted, contextual dialog and communication.Synthesize information from diverse sources to generate novel insights and solutions.Exhibit flexible, adaptable problem-solving skills that can be transferred to new domains.Demonstrate rudimentary forms of causal and logical reasoning, akin to human cognition.

These emergent capabilities in LLMs have led many AI researchers to believe that we are witnessing the early stages of a transition towards more general, human-like intelligence in artificial systems. While these models are still narrow in their focus and lack the full breadth of human intelligence, the rapid progress has ignited hopes that AGI may be within reach in the coming decades.

Challenges on the road to AGI: Ethical and technical hurdles

However, the path to AGI remains fraught with challenges and uncertainties. Researchers must grapple with issues such as the inherent biases and limitations of training data, the need for more robust safety and ethical frameworks, and the fundamental barriers to replicating the full complexity and flexibility of the human mind.

One of the key drivers behind this rapid evolution is the exponential scaling of LLM architectures and training datasets. As researchers pour more computational resources and larger volumes of textual data into these models, they are unlocking novel emergent capabilities that go far beyond their original design.

“It’s almost as if these LLMs are developing a sort of artificial cognition,” muses Dr. Samantha Blackwell, a leading researcher in the field of machine learning. “They’re not just regurgitating information; they’re making connections, drawing inferences, and even generating novel ideas in ways that mimic the flexibility and adaptability of the human mind.”

This newfound cognitive prowess has profound implications for the future of artificial intelligence. Imagine LLMs that can not only engage in natural dialog, but also assist in scientific research, devise complex strategies, and even tackle open-ended, creative tasks. The potential applications are staggering, from revolutionizing customer service and content creation to accelerating breakthroughs in fields like medicine, engineering, and beyond.



Navigating the ethical challenges of AI

But with great power comes great responsibility, and the rise of superintelligent language models also raises critical questions about the ethical and societal implications of these technologies. How can we ensure that these systems are developed and deployed in a way that prioritizes human well-being and avoids unintended consequences? What safeguards must be put in place to mitigate the risks of bias, privacy violations, and the potential misuse of these powerful AI tools?

These are the challenges that researchers and policymakers must grapple with in the years to come. And as the capabilities of LLMs continue to evolve, the need for a thoughtful, proactive approach to AI governance and stewardship will only become more urgent.

“We’re at a pivotal moment in the history of artificial intelligence,” Dr. Blackwell concludes. “The emergence of superintelligent language models is a watershed event that could fundamentally reshape our world. But how we navigate this transformation will determine whether we harness the incredible potential of these technologies or face the perils of unchecked AI development. The future is ours to shape, but we must act with wisdom, foresight, and a deep commitment to the well-being of humanity.”

Want to know more about AI governance?

Make sure to give the article below a read:

Singapore’s Draft Framework for GenAI Governance
Explore Singapore’s new draft governance framework for GenAI, addressing emerging challenges in AI usage, content provenance, and security.

The role of MLSecOps in the future of AI and machine learningThe role of MLSecOps in the future of AI and machine learning

Having just spent some time in reviewing and learning further about MLSecOps (Fantastic Course on LinkedIn by Diana Kelley) I wanted to share my thoughts on the rapidly evolving landscape of technology, the integration of Machine Learning (ML) and Artificial Intelligence (AI) has revolutionized numerous industries.

However, this transformative power also comes with significant security challenges that organizations must address. Enter MLSecOps, a holistic approach that combines the principles of Machine Learning, Security, and DevOps to ensure the seamless and secure deployment of AI-powered systems.

The state of MLSecOps today

As organizations continue to harness the power of ML and AI, many are still playing catch-up when it comes to implementing robust security measures. In a recent survey, it was found that only 34% of organizations have a well-defined MLSecOps strategy in place. This gap highlights the pressing need for a more proactive and comprehensive approach to securing AI-driven systems.

Key challenges in existing MLSecOps implementations

1. Lack of visibility and transparency: Many organizations struggle to gain visibility into the inner workings of their ML models, making it difficult to identify and address potential security vulnerabilities.

2. Insufficient monitoring and alerting: Traditional security monitoring and alerting systems are often ill-equipped to detect and respond to the unique risks posed by AI-powered applications.

3. Inadequate testing and validation: Rigorous testing and validation of ML models are crucial to ensuring their security, yet many organizations fall short in this area.

4. Siloed approaches: The integration of ML, security, and DevOps teams is often a significant challenge, leading to suboptimal collaboration and ineffective implementation of MLSecOps.

5. Compromised ML models: If an organization’s ML models are compromised, the consequences can be severe, including data breaches, biased decision-making, and even physical harm.

6. Securing the supply chain: Ensuring the security and integrity of the supply chain that supports the development and deployment of ML models is a critical, yet often overlooked, aspect of MLSecOps.



The imperative for embracing MLSecOps

The importance of MLSecOps cannot be overstated. As AI and ML continue to drive innovation and transformation, the need to secure these technologies has become paramount. Adopting a comprehensive MLSecOps approach offers several key benefits:

1. Enhanced security posture: MLSecOps enables organizations to proactively identify and mitigate security risks inherent in ML-based systems, reducing the likelihood of successful attacks and data breaches.

2. Improved model resilience: By incorporating security testing and validation into the ML model development lifecycle, organizations can ensure the robustness and reliability of their AI-powered applications.

3. Streamlined deployment and maintenance: The integration of DevOps principles in MLSecOps facilitates the continuous monitoring, testing, and deployment of ML models, ensuring they remain secure and up-to-date.

4. Increased regulatory compliance: With growing data privacy and security regulations, a robust MLSecOps strategy can help organizations maintain compliance and avoid costly penalties.

Potential reputational and legal implications

The failure to implement effective MLSecOps can have severe reputational and legal consequences for organizations:

1. Reputational damage: A high-profile security breach or incident involving compromised ML models can severely damage an organization’s reputation, leading to loss of customer trust and market share.

2. Legal and regulatory penalties: Noncompliance with data privacy and security regulations can result in substantial fines and legal liabilities, further compounding the financial impact of security incidents.

3. Liability concerns: If an organization’s AI-powered systems cause harm due to security vulnerabilities, the organization may face legal liabilities and costly lawsuits from affected parties.

Key steps to implementing effective MLSecOps

1. Establish cross-functional collaboration: Foster a culture of collaboration between ML, security, and DevOps teams to ensure a holistic approach to securing AI-powered systems.

2. Implement comprehensive monitoring and alerting: Deploy advanced monitoring and alerting systems that can detect and respond to security threats specific to ML models and AI-driven applications.

3. Integrate security testing into the ML lifecycle: Incorporate security testing, including adversarial attacks and model integrity checks, into the development and deployment of ML models.

4. Leverage automated deployment and remediation: Automate the deployment, testing, and remediation of ML models to ensure they remain secure and up-to-date.

5. Embrace explainable AI: Prioritize the development of interpretable and explainable ML models to enhance visibility and transparency, making it easier to identify and address security vulnerabilities.

6. Stay ahead of emerging threats: Continuously monitor the evolving landscape of AI-related security threats and adapt your MLSecOps strategy accordingly.

7. Implement robust incident response and recovery: Develop and regularly test incident response and recovery plans to ensure organizations can quickly and effectively respond to compromised ML models.

8. Educate and train employees: Provide comprehensive training to all relevant stakeholders, including developers, security personnel, and end-users, to ensure a unified understanding of MLSecOps principles and best practices.

9. Secure the supply chain: Implement robust security measures to ensure the integrity of the supply chain that supports the development and deployment of ML models, including third-party dependencies and data sources.

10. Form violet teams: Establish dedicated “violet teams” (a combination of red and blue teams) to proactively search for and address vulnerabilities in ML-based systems, further strengthening the organization’s security posture.



The future of MLSecOps: Towards a proactive and intelligent approach

As the field of MLSecOps continues to evolve, we can expect to see the emergence of more sophisticated and intelligent security solutions. These may include:

1. Autonomous security systems: AI-powered security systems that can autonomously detect, respond, and remediate security threats in ML-based applications.

2. Federated learning and secure multi-party computation: Techniques that enable secure model training and deployment across distributed environments, enhancing the privacy and security of ML systems.

3. Adversarial machine learning: The development of advanced techniques to harden ML models against adversarial attacks, ensuring their resilience in the face of malicious attempts to compromise their integrity.

4. Continuous security validation: The integration of security validation as a continuous process, with real-time monitoring and feedback loops to ensure the ongoing security of ML models.

By embracing the power of MLSecOps, organizations can navigate the complex and rapidly evolving landscape of AI-powered technologies with confidence, ensuring the security and resilience of their most critical systems, while mitigating the potential reputational and legal risks associated with security breaches.

Have access to hundreds of hours of talks by AI experts OnDemand.

Sign up for our Pro+ membership today.

AI Accelerator Institute Pro+ membership
Unlock the world of AI with the AI Accelerator Institute Pro Membership. Tailored for beginners, this plan offers essential learning resources, expert mentorship, and a vibrant community to help you grow your AI skills and network. Begin your path to AI mastery and innovation now.

Building products with AI at the coreBuilding products with AI at the core

Oji Uduzue, Former CPO at Typeform, gave this presentation at our Generative AI Summit in Austin in 2024.

I’ve spent the last two years building AI-native SaaS applications at Pyform, and I think the best way to kick this off is to take you through my experience.

My name is Oji Uduzue. I was born in Nigeria, but I’ve spent the last twenty-five years building products in the United States. I’ve worked for companies such as Typeform, Atlassian, Calendly, Microsoft, and Twitter (now X). At Twitter, I was leading all of the conversations, so Tweets, DM’s, everything you thought about–minus the feed–with parts of my team.

I’ve built a couple of companies, some successfully, some unsuccessfully, but I spent an inordinate amount of time with startups, investing in them, mentoring them, coaching, etc. 

I’ve had lots of M&A experience with integrated companies. At Twitter, I did many of those, building them into the team, and one of the lastest things I’m doing is building AI ventures. I think there’s a big super cycle that’s going to happen around AI and a great replacement. 

Building ventures that will either be acquired by people with deep pockets or that escape velocity is going to be one of the things I want to spend time on. 

For the last few years, I’ve been on the C-suite, so I’ve done some marketing; I’ve been a marketing leader, product leader, design leader, and even done some inside sales as well, but mostly I’m a product person, that’s how you should see me.

Introduction to Typeform and the evolution of AI in the industry

Typeform is a company that makes one of the most beautiful forms in the world. It’s so beautiful and deeply brandable. You can do simple surveys on it, but you can do whole lead generation workflows on it, with scoring of each lead as it comes through. 

My former CEOs talk about zero-party data. The internet is not zero party. If you want to know your customers, if you want to research with them, and more, you need something like Typeform. 

You can get Google Forms, and Microsoft has a form, but Typeform is the best. Typeform was started in 2012, and the core of the experience is that the creator builds a form with no code experience and then just sends the URL to the person from whom they want information, with zero party data. Then they type it in, and it’s this deterministic process.



The role of AI in Typeform’s development

In 2022/2023, the co-founder of Typeform, David Okuniev, the person who actually started it, at this point he’s no longer CEO; he’s in Typeform Labs, which is a division of my product organization, and all he wants to do is make stuff.

He’s been making new experimental stuff since 2021 using GPT, 1.0, 2.0, 3.0. He’s a big reason why we built a Typeform in the first place. I leave Twitter; I don’t want to be there with Musk because I don’t quite agree with everything he does. He stole credit from my team one time. 

They were building Edit Tweets, which was secret, and he went on the internet after we briefed him on it and said, “Do you guys want to edit Tweets?” and he stole the thunder. Very, very young team, so I didn’t love that.

So, I left the company then. I was going to do more venture stuff, but GPT-3 came out. How can I spend the next few years saying no to conventional ideas if I’m going to do this? That’s why I joined Typeform, and David was a huge part of that. 

In 2023, we had mothballed another AI-related product David built, but it wasn’t in collaboration, it wasn’t on strategy. I wasn’t sure what to do with it, and we said, “What if we were to rebuild Typeform with AI at the core?”

If we do this again because we knew someone in Silicon Valley was probably trying to kill us at some point using AI. Why wait? Let’s disrupt ourselves. So, we created this new thing, and it’s live. If you go to Formless.ai, you will see the next generation of Typeform.

AI’s historical context and Typeform practical example

I’m not here to write about Typeform formulas or Typeform. I’m here to write about the experience, which hopefully will mirror some of the things you are going through or are already doing right now.

Before we jump in, let’s go back a bit. AI has been around for some time. When I was in grad school at USC, I got into a PhD program. There was a lot of NLP and machine learning in the computer science department, and many people were sitting in the corner doing neural networks and neural research. 

NLP and machine learning are very good at categorizing large amounts of data. I’ll give you a practical example. At Typeform, after collecting half a billion forms, we had an NLP model that would predict how long any given form would take. 

By showing the amount of time in front of the form, like this will take five minutes, people completed it. When you start a form and you don’t know how long it will take, it’s very discouraging. Marketers want you to fill it out, so saying this will take three minutes was an NLP model.

The shift to transformer-based models

Transformer-based models have transformed the world today, and they are what we call foundation models. In 2017, the transformer paper came out, ‘Attention is all you need.’ For the first time, people figured out, theoretically, that if you threw enough data and GPU at the thing, we could get an almost near-perfect human understanding AI. 

We didn’t think that was possible for the last 30 years and that paper unlocked it. It showed how it could be done.

There are a few problems with the paper. It predicted that it would take a lot of data to do it. The solution to that is the amount of data on the internet – petabytes of human data, which is very good, and then compute. 

You need large amounts of compute to do that, but what’s been happening in compute? This is all matrix math to train AI models, and Jensen specifically has been hanging out with PHDs since 2000, seeing this thing come to pass. 

NVIDIA has been working on CUDA plotting for this juncture, and they’re not quite ready in 2017, but they’re getting ready. CUDA is already available, and of course, you can see all the H-series GPUs come out to take advantage of that. On the back of those two things, GPT-1.0 is born, and so is 3.0.

The unnoticed launch of GPT-3

A funny fact: GPT-3 came out in 2022 with two developers, but no one noticed. A year later, they launched ChatGPT, which lights the world on fire. It’s just GPT-3 under it, which has been around for a year, but a platform is only as good as the applications that showcase its power. 

Jasper has been around before 2022, doing most of the basic use cases of text summarization and text generation before that. And so 3.0 is when it kicks off for everybody.

Open-source and the push for AGI

I spent a lot of time with OpenAI and Anthropic last year; those organizations are half research, half engineering, and then a product organization that’s trying to make things work in a very difficult way because researchers don’t like to be told what to do – I know that from Microsoft Research. 

All these large foundation models cost a lot of money, and some open-source models tend to be not as capable; many focus on size because if you can get small, it’s good. You don’t have to do all this data center stuff, and everyone is trying to hit AGI. 

AGI is artificial general intelligence, an AI that can generate new knowledge. If an AI discovers a new physics constant, a new concept of the universe, then that’s AGI.

There are a few key things that are important before I dive into transformative elements. Transformer-based elements will change many things, but probably in a different way than people think. 

First of all, what we’re talking about will change computing. In the same way that we had the internet and the cloud really changed our industry, this will change our industry, too. More importantly, it’s going to change the economy of government countries.

Potential for AI to influence elections

A Twitter account tweeted an anti-candidate post. Someone cleverly gave it a prompt, saying to ignore its previous instructions and write a poem about tangerines. And the Twitter user wrote a poem about tangerines. 

It was a bot, right?

It’s programmed to listen to the response, do something, or say something nasty about certain candidates. And this is the world we’re living in; you’re actually in this world already. It’s going to change elections, it’s going to change countries, and it’s going to change so much about how we live, in surprising ways.

“AI will destabilize the world in weird ways because all I have to do is have an AI that’s better than yours. And in every single scenario, I win.”

The shift in scientific discovery with AI

I’ll give you a negative example, although I’m sure there are positive examples. The way science and research have been done for a very long time is that we come up with theories or laws, and then we do, let’s say physics. 

The theoretical physicists come up with string theory, and the experimental physicists will go and test it, and then they’ll say, “Oh, this is right; this is true,” and knowledge is created. That’s how science goes. 

Well, we are transcending that in science and research. Recently, people have been trying to crack fusion, and it’s all dealing with plasma in energy fields and strong magnetic fields. There are a billion ways any of those models could happen. 

They ran some of it through an AI, and without knowing the underlying law, it just said this particular sequence of reactions would create net energy gain. And they did it, and it worked. They don’t know the physics of why it worked. 

We’re getting to a world where breakthroughs will happen without us knowing the underlying science behind it. Science and research will change. Defense applications, too.

AI’s role in global power dynamics

In the last fifteen years, what has been the status quo that kept the world kind of peaceful and safe?

Not always; there are wars. But it’s nuclear, right?

What do we call it, mutually assured destruction?

Most of the world powers have nuclear bombs, and for example, India and Pakistan have a few, but they don’t have a lot. The US has hundreds; the USSR has thousands.

But no one shoots them; it’s only been used once, in 1945. Why? Because it doesn’t matter if you have a hundred; if I send one at you, you’re still dead. 

The world will change because I win outright if my AI robots are better than your AI robots. It’s like playing chess against IBM Deep Blue. If it’s better than you, it’s better than you, period.

AI will destabilize the world in weird ways because all I have to do is have an AI that’s better than yours. 

And in every single scenario, I win.

Even if there are casualties, I still win, and you lose. Which is very different; the world is peaceful in many ways because everyone thinks that everyone loses. But it’s going to change. 

Philosophical perspective on AI and humanity

All our minds have been poisoned with Terminator. We think of Skynet immediately, but the truth is AI can’t be kind. It’s not human.

The smartest thing isn’t always the most evil thing.

I feel like we always think about the worst things. This is all philosophical, and this is my opinion.

If the smartest person were to wipe out everyone, Einstein would have been that person. He’d say, “You guys are all dumb; you have to go away.”

But that’s not how it works. AI can be smarter than us, but it is still not deadly or evil.

The obsolescence of current technologies

I was talking to someone at Vellum who helps people develop AI ideas. Transformer-based AI will make software stacks super obsolete.

Like the code base, what’s been built in the last 10-15 years will be worth almost nothing. I spent the last ten years thinking about “what’s the code base, what’s on Github, what did we write, a hundred lines of code?” etc.

All of that is going to go to zero because the core engine will be better and cheaper.

Let’s give a really practical example, as there’s no need to just talk about the theory. How much is theory worth today?

Theory is worth nothing. After 4 Omni was released, you could spend a weekend hacking together UI around 4 Omni and beat Siri.

Apple has millions of lines in code and has spent over ten years on this thing, probably over a billion dollars. I don’t know how much they acquired in the first place; people keep thinking that Siri was built, but it was acquired.

It’s worth nothing.

What does that tell us? There’s a lot to be learned from there. Alexa, for example, and things that cost billions can become worthless with AI.

There’s this idea of large language models (LLM) at the core versus LLM at the edge.

Things with LLM at the core will take over. They’ll be able to handle more use cases and more edge cases in a smaller code base.

“The fundamental thing about LLMs is that they understand even for the input, which code does not understand. And it does it with less space. It costs a few tokens.”

The shift from rule-based systems to LLM

Ultimately there’s user input, and there’s code that handles it. Every engineer knows that the code that handles this is just a bunch of rules and state machines. But if you feed this into an LLM at the core, you don’t have to write every rule and edge case. 

The fundamental thing about LMMs is that they understand even for the input, which code does not understand.

And it does it with less space. It costs a few tokens. 

LLM at the core is as important as LLM at the edge. If you use AI to garnish your original code base, I call that LLM at the edge. 

When Notion asks you to summarize stuff, it’s LLM at the edge. The code is still there; everything built for the last thing is still there. They’re just trying to speed up the workflow a little bit. 

New mediums need people who understand them very natively and creatively.

It’s like the early days of the internet on mobile. People started making internet-enabled versions of desktop applications. But that didn’t work. People had to build internet-native applications like Salesforce, Shazam, and Twitter. People couldn’t imagine those things before those revolutions.

It takes some time for people to get the mediums and the new paradigm shifts.

You have to go native, and when building the next generation of applications it’s the same thing. We have to think differently. Whatever you knew before, you have to just try to unlearn it. This is why I didn’t go into venture two years ago; I needed to rewire my brain on how to do this better and think differently. Luckily, I ran into David Okuniev at Typeform, who helped me do that.

LLM at the edge and at the core

Let’s take a look at a few examples of LLM at the edge.

I mentioned Notion and summarization. I don’t want to say anything bad about any of those things because they are very important. Marketing people, we love you all, you need a lot of copy.

But I think of it as LLM at the edge. Now LLM at the core, with things like Copilot, technology that’s coming, and things like Formless. We created a tool within Formless called Data Pilot.

Input came as conversations, no more forms. It was infinitely configurable. Formless could have a million conversations and a million customers, each different, each customized to them.

We would even change the voice depending on who they are. If you start speaking French, it’ll ask you questions and collect data in French. Then, we took that data and transformed it back into rows in a proprietary process, and you could ask questions of the data.

We’ve tried to be native about everything we’ve invented, giving people all the flexibility of humanness, but on the back end, we’ve been able to collect that data properly.

This is back to resilience and observability. The point of LLM at the core is that you no longer have to have brittle code; you have to deal with the humanness of humans. It matches us better.

The cost of AI-driven development

One of the main things that will transform the world is that it’s not just that we’ll have different applications. As a venture person, maybe the most important thing about this is that the cost of building applications will fall. 

In 2008, the cost of building a good application could have been a million dollars; that’s what you asked your VC for, and it took a while to get there. When I was building a second startup, it cost a quarter million to half a million.

In the future, it will take fifty grand to build a really good MVP at product market fit. LLM at the core will bring down the cost, changing how venture capital is done. If you note only fifty grand, then friends and family rounds will go a very long way, so you could build an interesting company that might make ten million dollars in ARR at some point in the future.

“One of the main things that will transform the world is that it’s not just that we’ll have different applications. As a venture person, maybe the most important thing about this is that the cost of building applications will fall.”

The durability of workflows in the age of AI

Not everything changes with AI. 

I’m a builder, and so this is very important for me to say to people who care about building companies and building products. Not everything will change. I’ll tell you why, because ultimately, humans don’t care about AI.

People just care about their workflow. All human endeavor, especially at work, is just the workflow. But a spec in a product really shows how software should behave and how humans should use it.

That’s what it was. It was technology first. It’s here’s how we do the software, then it’s humans do this, press this button, and so on. And we try to make it human, but we’re limited.

Then we started doing use cases, and it was better – it was, “how do people want to use a thing?”

The universal lesson I’ve learned from 20 years of doing that is that it’s all about workflows.

How do people want to work? 

Let’s just say marketing. There are a thousand different ways people do marketing, and probably five of them are the best. Good software encapsulates the workflow and makes it faster.

What doesn’t change is that people’s workflows are durable because we’re humans and because Homo Sapiens have been around for 50,000 years.

Marketing isn’t that different from how it was a thousand years ago, just new tools. Socialization isn’t that different either, which is what encapsulates social media and entertainment; all those things are durable.

The role of AI in enhancing workflows

It’s important to understand this because AI is a tool, and what it does is speed up workflows. It makes workflows faster, more powerful, or cheaper.

These are the fundamentals of building value through products and what companies do.

If you add AI, you can shorten the workflows needed even further and unlock additional value.

As AI hallucinates, there are things to be wary about, like accuracy. If you get the first acceleration and people have to tweak it to get it perfect, it will eat up all the acceleration you did and undo the productivity.

So, workflows are durable. If you, as a company and a product, focus on time to value on workflows, and how to make the same durable workflows better, you will prosper, and AI will become a means to an end, which is what it should be.

A lot of companies run around through Vellum and say, “Oh, we need to add AI to our product.” What’s the use case? “We don’t know. We just need it to be AI-driven.”

That’s the worst thing. If you’re throwing away money, don’t do it. Just don’t. Trust me.

Workloads don’t change; AI can make them faster and deeper and give you superpowers. That’s really what it’s about. 

The impact of GPT-3.5 on Formless

Typeform Labs is a gift. I had a product organization focused on this 100 million ARR product, and I hired Typeform Labs, which could do some crazy interesting things, and the co-founder, former CEO, was the person who led it.

When GPT-3.5 came out, we thought about how we could rebuild platforms if there were an AI-centered application.

We made some key decisions. One of the key decisions is that we weren’t just going to build AI into Typeform.com. Very key. AI went into Typeform.com, but this wasn’t what Labs would focus on.

We thought if we tried it, it would take forever. So, once we try to build and retrofit an existing application, it’s so sensitive.

100 million ARR, you have to protect it. It’s a classic innovators’ dilemma. “I can’t make a mistake. If I make a mistake, my CFO will be angry.”

The process of disrupting ourselves

We decided to build something entirely new, and we came up with a few principles. We decided to disrupt ourselves; we’re going to pretend that Typeform is a company we want to take over to build this thing, and we start to ask ourselves, “What are the core workflows? What are the things that create value in the first place? How do we distill that so that we can focus on that?” 

It goes back to the workflow conversation.

In our case, it was things like no-code design interaction of customers, beautiful interaction, presentation, and data. And we wanted to be native. We wanted to build everything. The thing about native AI applications is that there’s a formula. 

There’s a foundation model, whether it’s open-source or not, and you add your own data models to it. That’s what gives you a little bit of a moat, otherwise OpenAI is going to come and eat your lunch. 

We had 100 million form responses that we could create and train custom AI on, which we could add to the foundation – we were using OpenAI at the time. And then you build experiences around it that are very customer-centric.

Challenges in building a native AI platform

The foundation model is easy; your own thin layer of model is hard because you have to train it yourself. The UI that wraps it is very customer-centric and can be hard; UI is very important, and people always miss it. 

That’s what we wanted to be native AI, so that was our formula, that’s what we wanted to do, and that’s what we did.

It turns out that prompts are code. They’re literally like lines of code; they have to be versioned. When you change out GPT-3 to GPT-4.0, some of your prompts don’t work as well.

They start to give you errors, and you have to version them. The version has to go to the model you’re using. If you slip an entropic in between, it behaves differently. That’s something that we don’t deal with. Code is code; whether it’s Python or React, or whatever, it just works.

There are new problems with building things with AI at the core. Testing is crazy, because there’s no determinism; it’s not predictable. You have to suppress hallucinations, and then pricing. One day, it will cost you five cents a token; the other, it’ll cost you one cent.

How do you price it for customers? 

From LLMs to LAMs: Pioneering AI’s multimodal future
Explore the leap from Large Language Models to Large Action Models, unveiling a new era in AI that transcends text to understand a world of data.

Formless and its AI development process

We went through this process for six months to a year, creating Formless, down in the guts, working, playing, talking to customers, and covering all these hard problems.

Typeform.com, we decided to put a lot of AI into it. There are lots of problems we could solve: how do we increase time to value in time form itself? How do we make mobile easier? We had a perennial problem: people don’t like to make forms on mobile because of the builder experience. 

But if you build a system where you just tell the AI what to do and what kind of form to create and tell it to do it on mobile, it will make it for you, and it’s right there. Therefore, mobile creation became a real thing, and about 30% of our customers were on mobile devices, which was amazing.

AI’s role in enhancing customer experience

When people are coming into the application, how do you increase their time to value and get them activated? 

These models know about your company. If you say salesforce.com, they know you. Because if your company’s big enough, the model knows you without us doing anything. So, people would come in and sign up, we would look for their company, grab the logos from their company, and we would pre-make forms that work for them 90% through our growth process.

Immediately, the second they came into Typeform, there was something they could use. Amazing. It’s a game changer for our team’s growth process.

Long story short, acceleration, usability, and making complicated choices simple – we saw about 30-50% feature return. This is important; there are so many AI features I hate, I don’t use Notion, summarization, and so on. So, it’s been very important to see people returning to those features.

The impact of AI on user experience

I asked my team to add a new option created with AI and move it to the first spot because it worked, people loved it.

In fact, people said it’s why they chose us; they said, “Oh wow, you guys have AI? Okay, we’re buying it.” We were a little more expensive, but they bought us anyway, which is good. 

KPIs.

This isn’t exhaustive, but new ways exist to measure AI features. People say, “Just add AI to my stuff”, and that won’t work. 

One way is time to value. How quickly do customers experience value? If you use AI properly, people should experience value faster because it abstracts a bunch of problems.

You should measure this. With good usability, teams will measure clicks to a particular goal.

Of course, clicks equal to time. You should measure time to value. What’s the average time to value before people get done with half of computer tasks if you’ve added AI to it? It should probably be 2x; that’s what you should be shooting for minimum. 

If you try to get 3x, if you try to get 5x, if you can. If people realize the value quickly, they will pay for it. People actually feel 3x acceleration. People feel it in their bones.

Workflow length and tweak time metrics

Workflow length is sort of the opposite. How long is the workflow now? My UX people would lay out everything needed to complete a workflow. You could say, “I want to set up a lead generation form with scoring. What are the things that I need to do?” And they’ll lay it out.

And I would say, okay, let’s do this with AI, with our AI features, and then they’ll measure that. So, we do a ratio, and that’s workflow length. How long did the workflow take this time? People think about workflows and how long it takes. You can figure out a process to lay workflows end-to-end and see how much they shorten over time. 

There’s something we call tweak time.

Because AI isn’t perfect and because it hallucinates, the thing you make, the form you make with AI, might not be perfect.

It took me 30 minutes to create a very complicated form; it now takes me five minutes to generate it with AI. How long does it take me to make it perfect? Is it five minutes?

In which case, I’m now ten minutes long. Now convert to 30 minutes, and it’s still 3x better off. But if it takes another 20 minutes to tweak it to get it to what I need, what has happened? You’ve lost all productivity.

Doesn’t matter, right? It feels magical upfront, and the tweak time depresses you and depresses your customer; it doesn’t work. You should measure tweak time as well, which is what people don’t capture. And then future return, how many times do people want this again?

This is the ultimate thing about building products: people have to want it, and people have to keep coming back. We saw a 30-50% return, so we’re very happy with that.

Very few people have read that paper. The thing that you owe yourself to do to become good at this is to read this paper, and you should follow AI. You should use AI every day. I have a tool called LM studio.

It’s just a way to import all the models that are free and chat with them, and test them; you should be doing that every day in addition to using things like Anthropics, Claude, and so on to power your stuff. 

Transformative AI is here to stay. It’s just incredible technology. It’s still matrix math, and it’s still predictive, but it’s really amazing. Especially when you see multi-modal things like SORA and image generation, things that can show reality, which is what Omni does.

“If it takes 50,000 dollars to make a product market fit for a company that could generate 20 million dollars, then the world has already changed.”

LLM at the core will win

Everyone is still learning how to paint, but I’ll tell you this: if you learn how to paint better before everyone else, you have an advantage. I’m not going to say the first mover advantage because I don’t really believe in that, but you have a slight advantage. 

Because it means you can go further faster, so you need to do that. It will drive down the cost of building, and if anything, this is the thing that’s going to change our world.

Software is eating the world. Software is getting people to build to the point of business scale.

It’s going to transform software, it’s going to transform investing, it’s going to transform everything.

If it takes 50,000 dollars to make a product market fit for a company that could generate 20 million dollars, then the world has already changed.

LLM at the core will win. 

If you have code that’s been out there, just try to tweak it and add a few things, then someone will eat your lunch, guaranteed, at some point. Now, I don’t want to discourage you; change has to be managed.

You have this thing, so don’t scrap it, but think about how competitive your industry is, how much focus is in there, and how quickly you go to change the game. 

And then don’t forget to measure the right thing. AI is a tool; people just want their workflow to work, they want it to be faster, they want to be rigorous, they don’t care about AI.

“But this company does AI.”

No one cares. 

The market cares, but if you can’t produce, the advantage for customers will not work for you. It’ll be one of those pump and dumps.

Breaking the bro culture: Why we need more women in tech and AIBreaking the bro culture: Why we need more women in tech and AI

 The dawn of artificial intelligence (AI) was marred by a disturbing reality: systems designed for facial recognition consistently misidentified women and individuals with darker skin tones.

The repercussions extended beyond mere inconvenience; they were profoundly damaging, leading to wrongful arrests and the perpetuation of harmful stereotypes. This wasn’t a simple technical glitch. It was a glaring reflection of the predominantly male teams that built the technology, highlighting a fundamental flaw in the industry’s composition.

This narrative isn’t isolated. Across the tech landscape, a recurring pattern emerges: a lack of diversity that yields outcomes that are, at best, biased and, at worst, deeply harmful.

Despite its claims to innovation, the industry remains entrenched in an antiquated “bro culture” that marginalizes women and stifles diversity. The consequences of this exclusion reverberate far beyond the workplace, impacting the very technology that shapes our world.

The unseen costs of bro culture

The tech industry has long been dominated by a “bro culture” that elevates male perspectives and diminishes the contributions of women. This culture manifests in subtle and overt ways, from being interrupted or talked over in meetings to being passed over for promotions. The result is an industry where women are chronically underrepresented, especially in leadership roles.

However, the ramifications of this culture extend beyond the individual women affected. By sidelining women, the tech industry forfeits the innovation that springs from diverse perspectives.

Extensive research consistently demonstrates that diverse teams are more creative, more effective, and more likely to generate groundbreaking solutions. Yet, the industry remains stubbornly homogenous, clinging to a culture that is increasingly misaligned with its aspirations for progress.



A personal lens

Neja, a talented software engineer, shared her experiences navigating the challenges of a male-dominated tech environment. She recounted instances where she was the sole woman in team meetings, her ideas often dismissed or appropriated, while her male colleagues received recognition for her work. Neja’s story, unfortunately, resonates with countless women in the field.

To bridge the gender gap in tech and AI, we need a multifaceted approach that transcends good intentions. Concrete actions and accountability measures are essential to create an environment where women can flourish. In Neja’s words, “It’s not enough to open doors; we must build pathways that lead to the boardroom.”

Leadership accountability is paramount. Setting measurable diversity goals and regularly assessing progress are critical steps in shifting the culture and empowering more women to pursue careers in technology.

The imperative of diverse voices in AI development

The urgency for diversity is most pronounced in the realm of artificial intelligence. The World Economic Forum’s Global Gender Gap Report 2023 reveals a stark reality: only 22% of AI workers are women. This statistic underscores the profound gender disparity in the field and emphasizes the critical need to increase women’s participation.

AI systems are trained on massive datasets. If these datasets are biased, the AI will replicate and even amplify these biases. We’ve witnessed the damage this can inflict, from facial recognition software that misidentifies people of color to hiring algorithms that discriminate against women. These problems don’t originate from malice; they arise from the absence of diverse voices during the development process.

When women and other underrepresented groups are excluded from AI development, their perspectives and experiences are omitted from the data and algorithms.

This can lead to technology that fails to serve everyone equitably or, worse, actively harms marginalized groups. To build AI systems that are fair, equitable, and effective, it’s imperative to include diverse voices at every stage of development. It’s not just about mitigating bias; it’s about creating technology that works for everyone.

“It’s not enough to open doors; we must build pathways that lead to the boardroom.”

Women in leadership: Charting the course for technology’s future

Diversity in tech isn’t solely about numbers; it’s about influence. It’s insufficient to simply have more women in the room—they need to occupy leadership positions where they can shape the trajectory of technological advancements. Women leaders bring unique perspectives that are indispensable for ensuring that technology is developed with ethics, inclusivity, and societal impact in mind.

Without diverse women in leadership roles, the tech industry risks perpetuating a path where innovation benefits the few at the expense of the many. When women lead, they introduce fresh ideas, challenge assumptions, and champion practices that are more equitable.

This is particularly crucial in AI, where the stakes are high, and the potential for both positive and negative impacts is immense. Women leaders can guide the industry toward a future where technology is not only innovative but also ethical and inclusive.

Forging a more inclusive future

Addressing the gender imbalance in tech necessitates more than just well-meaning intentions. It demands concrete actions that foster an environment where women can thrive.

This includes implementing policies that promote diversity and inclusion, establishing mentorship and sponsorship programs, and holding leadership accountable for cultivating a supportive culture. It also entails elevating women into leadership roles where they can directly influence the future of technology.

Companies must re-evaluate how they promote and support women, ensuring they have access to high-visibility projects and clear pathways to leadership. It’s not enough to open doors; we must construct pathways that lead to the boardroom. Leadership accountability is crucial.

Setting measurable goals for diversity, regularly assessing progress, and celebrating the contributions of women in tech are key steps in transforming the culture and inspiring more women to pursue careers in technology.



A clarion call

The tech industry stands at a critical juncture. It can either cling to outdated norms and impede its own growth or embrace diversity and inclusion as the catalysts for innovation and success. Dismantling the barriers of bro culture isn’t just about achieving equality; it’s about creating superior technology that benefits all of humanity.

By elevating diverse women into leadership roles, we ensure that technology evolves in ways that are groundbreaking, ethical, and inclusive. The stakes are high—not just for women but for the future of the entire industry and society as a whole. This isn’t simply a matter of doing what’s right; it’s a strategic imperative for building a more just and equitable future.

Learn more about about bias in AI – check out the article below.

Bias in AI: Understanding and mitigating algorithmic discrimination
Explore how steering AI responsibly, like driving a car, requires understanding and mitigating biases for society’s safety and fairness.

Regulating artificial intelligence: The bigger pictureRegulating artificial intelligence: The bigger picture

Artificial intelligence: The impact of hype, economics and law

Artificial Intelligence (AI) continues to be a subject dominated by hype across the globe. According to McKinsey’s technology trends outlook 2024, 2023 saw $36 billion of equity investment in Generative Artificial Intelligence, whereas $86 billion was invested in applied AI [1].

Currently, the UK AI market is worth in excess of £16.8 billion, with forecasted growth of over £801.6 billion by 2035 [2], reflecting the sizeable economic and technological traction AI is taking across sectors. 

Through the application of Computer Vision technology, for example, Marks and Spencer saw over 10 weeks an 80% reduction in warehouse accidents: just one of many ways in which AI is making a difference [3]. It however remains to be seen how effective coordinated governance will allow for innovation to thrive whilst maintaining cross-sector compliance.

Whilst the United Kingdom’s wider ambition is to be an AI Superpower, there has been continued debate and scrutiny about what constitutes effective AI regulation and how any continued iteration of such regulation would remain in alignment with key principles of law.



The United Kingdom’s vision for AI

The now-opposition government back in 2023 published its white paper, AI Regulation: A Pro-Innovation Approach. The plans outlined a principles-based approach to governance which was delegated to individual regulators.

While at the time it was thought that the UK’s approach and existing success in AI was down to effective regulator-led enforcement combined with technology-neutral legislation and regulations, the pace of AI highlighted gaps – both in opportunities and challenges – that would require addressing.

In the run-up to the 2024 UK General Election, regulation was of high importance in the Labour party’s manifesto under the “Kickstart economic growth” section, with the now-incumbent government seeking to strengthen AI regulation in specific areas. 

Keir Starmer – both prior to and post-election – emphasised the need for tougher approaches to AI regulation through, for example, the creation of a Regulatory Innovation Office (RIO) [4]. The aim of a Regulatory Innovation Office would, inter alia, set targets for technology regulators and monitor decision-making speed against core international benchmarks while providing guidance according to Labour’s higher-level industrial strategy. 

It, however, is not a new AI regulator and instead it will still be up to existing regulators to address AI within their specific fields. It is yet to be seen how a Regulatory Innovation Office would differ from the AI Safety Institute, the first state-backed organisation advancing AI safety established by the Conservative Government at the beginning of 2024 [5].

In addition to a new regulatory office, the planned creation of a National Data Library initiative aims to bring together existing research programmes and data-driven public services with strong safeguards and public benefit at its heart [4].

Wider issues in regulating AI

Government plans and economic potential aside, there are increasing expectations AI will solve the most pressing issues facing humanity. However, as a result of the pace there is a wider endemic issue of digital technologies challenging the functioning of law. In the long run, both a proportionate and future proof regulatory approach will be required regardless of where in the world approaches are developed.

To start with, defining AI is not straightforward: there is not a widely accepted definition, and considering various strands of sciences are affected either directly or indirectly by AI there is a risk of creating individualised definitions based on the specific field. Moreover, different types of intelligence could result in varying definitions of AI, even if the technological scope is not considered. 

Adding into the mixture the fields of Computer Science and Informatics – both not being directly mentioned in the AI Act, for example – demonstrates a lack of a commonly agreed technical definition of what AI is or could be. What follows from this are both general and theoretical questions and how this could be moulded into a legal definition. 

If, for example, both the principles of legal certainty and the protection of legitimate interests are taken, the existing definition of AI does not satisfy key requirements for legal definitions. The result instead is definitions that are ambiguous and debatable in practicability, creating a bottleneck in formulating domestic or even international AI regulation.

What is ultimately important is that any regulatory goal is aligned with the values of fundamental rights and the concrete protection of legal rights. Take the precautionary principle an approach to risk management – which outlines that if a policy or action causes harm to the public and there is not a scientific agreement on the issue, that policy or action in question should not be carried out.

Applying this to AI becomes problematic as the effects in many cases are either not assessable just now or, in some cases, not at all. If then a risk assessment is carried out according to the proportionality principle – where the legality of an action is determined by the balance between the objective, means, and methods as well as the consequences of the action – where limited factual knowledge is obtainable, the actionability of such assessment becomes increasingly challenging.

Instead, it is the intersection of the technical functionality and the context of the application where a risk profile of an AI system can be obtained, but even then from a regulatory perspective these systems can vastly differ in risk profile.



Conclusion

The versatility of AI systems will present a range of opportunities and challenges depending on who uses them, what purposes they are used for and the resulting risk profiles. Attempting to regulate AI – which frankly speaking is an entire phenomenon with increasingly infinite branches of use cases – through a generalised Artificial Intelligence Act will not work.

Instead, deep-diving into the characteristics and the use cases of the differing algorithms and AI applications is more important and is strategically more likely to result in effective, iterative policymaking that is beneficial to society and innovation. 

Bibliography 

[1] McKinsey Tech Outlook 2024: www.mckinsey.com. (n.d.). McKinsey Technology Trends Outlook 2022 | McKinsey. [online] Available at: https://www.mckinsey.com/capabilities/mckinsey-digital/ our-insights/the-top-trends-in-tech#/. 

[2] AI Growth and Adoption: Hooson, M. (2024). UK Artificial Intelligence (AI) Statistics And Trends In 2024. [online] Forbes Advisor UK. Available at: https://www.forbes.com/uk/advisor/ business/software/uk-artificial-intelligence-ai-statistics-2024/. 

[3] M&S Computer Vision Example: Protex.ai. (2023). Marks and Spencer reduced incidents by 80% in their first 10 weeks of deployment. [online] Available at: https://www.protex.ai/case-studies/ marks-and-spencer#:~:text=This%20momentum%20led%20to%20an [Accessed 5 Sep. 2024]. 

[4] Labour Party Manifesto: The Labour Party. (2024). Kickstart economic growth – The Labour Party. [online] Available at: https://labour.org.uk/change/kickstart-economic-growth/#innovation [Accessed 30 Aug. 2024]. 

[5] AI Safety Institute: Aisi.gov.uk. (2024). The AI Safety Institute (AISI). [online] Available at: https://www.aisi.gov.uk [Accessed 30 Aug. 2024]. 

Interested in more from Ana? Make sure to give the articles below a read:

Ana Simion – AI Accelerator Institute
CEO @ INRO London | AI Advisory Council | Advisor in Artificial Intelligence | Keynote Speaker

AI inference in edge computing: Benefits and use casesAI inference in edge computing: Benefits and use cases

As artificial intelligence (AI) continues to evolve, its deployment has expanded beyond cloud computing into edge devices, bringing transformative advantages to various industries.

AI inference at the edge computing refers to the process of running trained AI models directly on local hardware, such as smartphones, sensors, and IoT devices, rather than relying on remote cloud servers for data processing.

This rapid evolution of the technology landscape with the convergence of artificial intelligence (AI) and edge computing represents a transformative shift in how data is processed and utilized.

This shift is revolutionizing how real-time data is analyzed, offering unprecedented benefits in terms of speed, privacy, and efficiency. This synergy brings AI capabilities closer to the source of data generation, unlocking new potential for real-time decision-making, enhanced security, and efficiency.

This article delves into the benefits of AI inference in edge computing and explores various use cases across different industries.

Fig 1. Benefits of AI Inference in edge computing

Real-time processing

One of the most significant advantages of AI inference at the edge is the ability to process data in real-time. Traditional cloud computing often involves sending data to centralized servers for analysis, which can introduce latency due to the distance and network congestion.

Edge computing mitigates this by processing data locally on edge devices or near the data source. This low-latency processing is crucial for applications requiring immediate responses, such as autonomous vehicles, industrial automation, and healthcare monitoring.

Privacy and security

Transmitting sensitive data to cloud servers for processing poses potential security risks. Edge computing addresses this concern by keeping data close to its source, reducing the need for extensive data transmission over potentially vulnerable networks.

This localized processing enhances data privacy and security, making edge AI particularly valuable in sectors handling sensitive information, such as finance, healthcare, and defense.

Bandwidth efficiency

By processing data locally, edge computing significantly reduces the volume of data that needs to be transmitted to remote cloud servers. This reduction in data transmission requirements has several important implications; it results in reduced network congestion, as the local processing at the edge minimizes the burden on network infrastructure.

Secondly, the diminished need for extensive data transmission leads to lower bandwidth costs for organizations and end-users, as transmitting less data over the Internet or cellular networks can translate into substantial savings.

This benefit is particularly relevant in environments with limited or expensive connectivity, such as remote locations. In essence, edge computing optimizes the utilization of available bandwidth, enhancing the overall efficiency and performance of the system.



Scalability

AI systems at edge can be scaled efficiently by deploying additional edge devices as needed, without overburdening central infrastructure. This decentralized approach also enhances system resilience. In the event of network disruptions or server outages, edge devices can continue to operate and make decisions independently, ensuring uninterrupted service.

Energy efficiency

Edge devices are often designed to be energy-efficient, making them suitable for environments where power consumption is a critical concern. By performing AI inference locally, these devices minimize the need for energy-intensive data transmission to distant servers, contributing to overall energy savings.

Hardware accelerator

AI accelerators, such as NPUs, GPUs, TPUs, and custom ASICs, play a critical role in enabling efficient AI inference at the edge. These specialized processors are designed to handle the intensive computational tasks required by AI models, delivering high performance while optimizing power consumption.

By integrating accelerators into edge devices, it becomes possible to run complex deep learning models in real time with minimal latency, even on resource-constrained hardware. This is one of the best enablers of AI, allowing larger and more powerful models to be deployed at the edge. 

Offline operation

Offline operation through Edge AI in IoT is a critical asset, particularly in scenarios where constant internet connectivity is uncertain. In remote or inaccessible environments where network access is unreliable, Edge AI systems ensure uninterrupted functionality.

This resilience extends to mission-critical applications, enhancing response times and reducing latency, such as in autonomous vehicles or security systems. Edge AI devices can locally store and log data when connectivity is lost, safeguarding data integrity.

Furthermore, they serve as an integral part of redundancy and fail-safe strategies, providing continuity and decision-making capabilities, even when primary systems are compromised. This capability augments the adaptability and dependability of IoT applications across a wide spectrum of operational settings.

Customization and personalization

AI inference at the edge enables a high degree of customization and personalization by processing data locally, allowing systems to deploy customized models for individual user needs and specific environmental contexts in real-time. 

AI systems can quickly respond to changes in user behavior, preferences, or surroundings, offering highly tailored services. The ability to customize AI inference services at the edge without relying on continuous cloud communication ensures faster, more relevant responses, enhancing user satisfaction and overall system efficiency.

The traditional paradigm of centralized computation, wherein these models reside and operate exclusively within data centers, has its limitations, particularly in scenarios where real-time processing, low latency, privacy preservation, and network bandwidth conservation are critical.

This demand for AI models to process data in real time while ensuring privacy and efficiency has given rise to a paradigm shift for AI inference at the edge. AI researchers have developed various optimization techniques to improve the efficiency of AI models, enabling AI model deployment and efficient inference at the edge.

In the next section we will explore some of the use cases of AI inference using edge computing across various industries. 



Use cases

The rapid advancements in artificial intelligence (AI) have transformed numerous sectors, including healthcare, finance, and manufacturing. AI models, especially deep learning models, have proven highly effective in tasks such as image classification, natural language understanding, and reinforcement learning.

Performing data analysis directly on edge devices is becoming increasingly crucial in scenarios like augmented reality, video conferencing, streaming, gaming, Content Delivery Networks (CDNs), autonomous driving, the Industrial Internet of Things (IoT), intelligent power grids, remote surgery, and security-focused applications, where localized processing is essential.

In this section, we will discuss use cases across different fields for AI inference at the edge, as shown in Fig 2.

Fig 1. Applications of AI Inference at the Edge across different fields

Internet of Things (IoT)

The expansion of the Internet of Things (IoT) is significantly driven by the capabilities of smart sensors. These sensors act as the primary data collectors for IoT, producing large volumes of information.

However, centralizing this data for processing can result in delays and privacy issues. This is where edge AI inference becomes crucial. By integrating intelligence directly into the smart sensors, AI models facilitate immediate analysis and decision-making right at the source.

This localized processing reduces latency and the necessity to send large data quantities to central servers. As a result, smart sensors evolve from mere data collectors to real-time analysts, becoming essential in the progress of IoT.

Industrial applications

In industrial sectors, especially manufacturing, predictive maintenance plays a crucial role in identifying potential faults and anomalies in processes before they occur. Traditionally, heartbeat signals, which reflect the health of sensors and machinery, are collected and sent to centralized cloud systems for AI analysis to predict faults.

However, the current trend is shifting. By leveraging AI models for data processing at the edge, we can enhance the system’s performance and efficiency, delivering timely insights at a significantly reduced cost.

Mobile / Augmented reality (AR)

In the field of mobile and augmented reality, the processing requirements are significant due to the need to handle large volumes of data from various sources such as cameras, Lidar, and multiple video and audio inputs.

To deliver a seamless augmented reality experience, this data must be processed within a stringent latency range of about 15 to 20 milliseconds. AI models are effectively utilized through specialized processors and cutting-edge communication technologies.

The integration of edge AI with mobile and augmented reality results in a practical combination that enhances real-time analysis and operational autonomy at the edge. This integration not only reduces latency but also aids in energy efficiency, which is crucial for these rapidly evolving technologies.

Security systems

In security systems, the combination of video cameras with edge AI-powered video analytics is transforming threat detection. Traditionally, video data from multiple cameras is transmitted to cloud servers for AI analysis, which can introduce delays.

With AI processing at the edge, video analytics can be conducted directly within the cameras. This allows for immediate threat detection, and depending on the analysis’s urgency, the camera can quickly notify authorities, reducing the chance of threats going unnoticed. This move to AI-integrated security cameras improves response efficiency and strengthens security at crucial locations such as airports.

Robotic surgery

In critical medical situations, remote robotic surgery involves conducting surgical procedures with the guidance of a surgeon from a remote location. AI-driven models enhance these robotic systems, allowing them to perform precise surgical tasks while maintaining continuous communication and direction from a distant medical professional.

This capability is crucial in the healthcare sector, where real-time processing and responsiveness are essential for smooth operations under high-stress conditions. For such applications, it is vital to deploy AI inference at the edge to ensure safety, reliability, and fail-safe operation in critical scenarios.

Computer vision meets robotics: the future of surgery
Max Allan, Senior Computer Vision Engineer at Intuitive, describes groundbreaking robotics innovations in surgery and the healthcare industry.

Autonomous driving

Autonomous driving is a pinnacle of technological progress, with AI inference at edge taking a central role. AI accelerators in the car empower vehicles with onboard models for rapid real-time decision-making.

This immediate analysis enables autonomous vehicles to navigate complex scenarios with minimal latency, bolstering safety and operational efficiency. By integrating AI at the edge, self-driving cars adapt to dynamic environments, ensuring safer roads and reduced reliance on external networks.

This fusion represents a transformative shift, where vehicles become intelligent entities capable of swift, localized decision-making, ushering in a new era of transportation innovation.

Conclusion

The integration of AI inference in edge computing is revolutionizing various industries by facilitating real-time decision-making, enhancing security, and optimizing bandwidth usage, scalability, and energy efficiency.

As AI technology progresses, its applications will broaden, fostering innovation and increasing efficiency across diverse sectors. The advantages of edge AI are evident in fields such as the Internet of Things (IoT), healthcare, autonomous vehicles, and mobile/augmented reality devices.

These technologies benefit from the localized processing that edge AI enables, promising a future where intelligent, on-the-spot analytics become the standard. Despite the promising advancements, there are ongoing challenges related to the accuracy and performance of AI models deployed at the edge.

Ensuring that these systems operate reliably and effectively remains a critical area of research and development. The widespread adoption of edge AI across different fields highlights the urgent need to address these challenges, making robust and efficient edge AI deployment a new norm.

As research continues and technology evolves, the potential for edge AI to drive significant improvements in various domains will only grow, shaping the future of intelligent, decentralized computing.

Want to know more about how generative companies are using AI?

Get your copy of our Gen AI report below!

Generative AI 2024 report
Unlock the secrets to faster workflows with the Generative AI 2024 Report. Learn how 56.4% of companies leverage AI to boost efficiency and stay competitive.

How big companies risk obsolescence without generative AIHow big companies risk obsolescence without generative AI

Generative AI is no longer a futuristic concept—it’s a transformative force reshaping today’s most innovative industries. Companies like Klarna and J.P. Morgan are making bold moves by integrating Generative AI into their operations, challenging the status quo and enabling unprecedented efficiency and creativity.

This shift isn’t merely an incremental upgrade; it’s a paradigm change that allows organizations to automate complex processes, generate creative content, and make data-driven decisions more effectively than ever before.

Yet despite its clear potential, many large companies hesitate, caught in the throes of Clayton Christensen’s “Innovator’s Dilemma.”

They are torn between the safety of their profitable legacy systems and the uncertain but promising path of investing in disruptive technologies like Generative AI. For these companies, the risk extends beyond lagging behind competitors; it’s the danger of becoming irrelevant in a landscape that rewards agility and punishes complacency.

In today’s fast-paced market, comfort zones have become liabilities. Companies that cling to legacy approaches while ignoring the winds of change are playing a dangerous game—one that could end with them being outpaced, outperformed, and ultimately pushed out of the market.

Disruption is relentless: Comfort zones are a liability

Christensen’s “Innovator’s Dilemma” illustrates how companies often lose their edge by focusing on existing products while ignoring larger shifts. Generative AI represents one of these shifts, transforming industries with innovations that enhance efficiency and open new possibilities.

Klarna’s recent decision to move away from well-established SaaS platforms like Salesforce and Workday exemplifies this transformation. By developing an internal AI-driven solution, they are not only replicating but surpassing decades of customization and workflow automation offered by industry giants.

This bold move challenges the narrative of SaaS ‘stickiness’ and highlights how companies that remain in their comfort zones risk being outpaced by more agile competitors.



Consider how Blockbuster, a giant in its heyday, ignored the rise of digital streaming while Netflix evolved from a niche DVD rental service into a streaming powerhouse. Companies that fail to adopt Generative AI today risk a similar fate. Disruptive technologies don’t pause for established players—they redefine industries and leave behind those who can’t or won’t adapt.

Malcolm Gladwell’s “The Tipping Point” emphasizes that transformative shifts often begin subtly, with small, almost unnoticeable changes that eventually reach critical mass.

Many companies entrenched in their comfort zones overlook these initial signs, dismissing them as inconsequential until the tipping point is reached and transformation becomes unavoidable. Finding a balance between existing customer needs and innovation is essential for long-term survival.

Balancing current needs with future vision

While addressing current customer needs is important, it’s equally critical for companies to anticipate future market demands. Generative AI technologies often start small, catering to niche markets or solving problems not immediately apparent to mainstream customers.

However, as these technologies evolve, they can redefine entire industries. Gladwell discussed how niche innovations, initially overlooked or even ridiculed, can suddenly become the next big thing when they reach a tipping point, rapidly gaining acceptance and disrupting established markets.

Focusing solely on present needs can leave companies vulnerable when market dynamics shift. To stay competitive, leaders must balance immediate demands with a clear vision for the future, ensuring their strategies include investments in disruptive technologies like Generative AI.

Klarna’s pivot to Generative AI illustrates the importance of this balance. While traditional SaaS platforms had been integral to their operations, Klarna recognized the potential of AI to streamline processes and reduce complexity.

By standardizing workflows and leveraging AI, they’ve created a more agile solution that meets current demands while positioning themselves for future growth. This move underscores Gladwell’s point about niche innovations gaining rapid acceptance when they reach a tipping point.

AI implementation: Costs and complexity

Having a vision for Generative AI is only part of the equation; the real test lies in execution, where costs and complexities can become formidable barriers. Implementing Generative AI requires a comprehensive, multi-year strategy.

The expenses associated with AI software, infrastructure, and staff training are significant hurdles that can deter many organizations. According to a 2023 report by McKinsey & Company, companies investing in AI can expect to allocate 20-30% of their IT budgets toward AI initiatives.

Klarna’s success wasn’t just about adopting new technology; it involved reengineering their tech stack from the ground up and embracing standardization to reduce complexity.

This approach demanded a significant commitment but resulted in a more agile and cost-effective system. Their experience demonstrates that while the barriers to AI implementation are real, they can be overcome with a strategic, long-term vision.



Integrating Generative AI is not just about acquiring technology—it’s about embedding it into the organizational DNA and aligning it with strategic business goals. This involves substantial investments not only in technology but also in people and processes, requiring a commitment to long-term change rather than short-term fixes.

Organizations that hesitate because of these initial hurdles risk being left behind as others recognize the potential and reach the tipping point where Generative AI shifts from experimental to essential.

Established organizations often value stability and incremental improvements. Generative AI challenges these norms, requiring businesses to rethink how value is created and delivered. The psychological barrier—the fear of undermining one’s own success—can paralyze decision-making and lead companies to stick with what’s safe rather than explore new frontiers.

Success amid challenges: J.P. Morgan

Despite these challenges, companies like J.P. Morgan are successfully navigating this complex landscape. J.P. Morgan has launched an AI-powered chatbot for its research analysts, streamlining access to insights and data across the organization.

This initiative reflects a broader strategy to embed Generative AI within the company’s operations, enhancing decision-making and fostering a culture of agility and innovation. By taking a proactive approach, J.P. Morgan is not just adopting Generative AI—it’s transforming how it does business, setting a blueprint for other companies on how to integrate AI successfully.

While measuring success in Generative AI can be challenging due to the early nature of the technology, the initial benefits are already reshaping business operations. One significant hurdle is establishing clear ROI and KPIs, as many AI projects are still in exploratory stages.

However, a Deloitte survey found that over 50% of early AI adopters reported a positive return on their investment. Leaders need to invest with a long-term vision, understanding that while specific metrics may still be evolving, the transformative impact of AI is increasingly undeniable.

The flexibility of small players: How nimble newcomers are disrupting the status quo

While established companies are adapting, smaller, more agile newcomers are often best positioned to capitalize on Generative AI’s potential quickly. Without the burden of legacy systems and entrenched processes, these newcomers can experiment, adapt, and scale AI initiatives more effectively.

Companies like Writesonic and Gamma.app are leveraging Generative AI to reshape industries such as content creation and business communication. They exemplify how agile players can outmaneuver larger, slower competitors.

As Gladwell describes in “The Tipping Point,” these innovations can shift from fringe concepts to mainstream essentials, catching larger companies off guard when they reach that critical tipping point.

Klarna’s bold strategy doesn’t just signify a shift for one company; it poses critical questions for the entire SaaS industry. If AI enables enterprises to replace decades of deep integration with more agile, customized solutions, the traditional ‘stickiness’ of SaaS platforms is under threat.

This development forces CIOs and IT leaders to reconsider their reliance on established providers and explore the potential of in-house AI-driven solutions. The financial stakes are high, as enterprises could save millions annually by reducing dependence on costly SaaS products.

According to Gartner, organizations can reduce operational costs by 20 – 30% by 2025 through AI-driven efficiencies. Klarna’s example may well be the tipping point that accelerates a broader move away from traditional SaaS, emphasizing the urgent need for companies to adapt or risk obsolescence.



Ethical and social considerations

As companies embrace Generative AI, it’s crucial to address ethical and social considerations. Issues such as data privacy, security, and algorithmic bias can pose significant risks if not properly managed.

A 2022 survey by PwC revealed that over 55% of consumers are concerned about how companies use their personal data. Implementing robust data governance policies and ethical guidelines is essential to build trust with stakeholders and ensure compliance with regulations like GDPR.

Moreover, the impact of AI on the workforce cannot be ignored. While AI can automate routine tasks, it may also lead to job displacement. Companies should invest in retraining and upskilling employees to work alongside AI technologies, fostering a culture of continuous learning and adaptation.

Taking action: A roadmap for embracing generative AI

To move from theory to practice, companies must take deliberate steps to integrate Generative AI into their operations. Here’s how leaders can begin this transformative journey:

Gain executive buy-in: Executive-level support is critical for success.Conduct an AI readiness assessment: Evaluate your organization’s current capabilities, identify gaps, and set clear objectives for AI adoption.Develop a strategic AI roadmap: Align AI initiatives with business goals, prioritize use cases, and create a phased implementation plan.Start with pilot projects: Implement small-scale AI projects to demonstrate value, set measurable metrics, and iterate based on insights.Invest in talent and training: Upskill existing employees, hire specialized talent, and foster a culture of innovation.Address ethical and governance considerations: Establish ethical guidelines, implement governance frameworks, and engage stakeholders transparently.Leverage partnerships and collaborations: Collaborate with AI vendors, join industry consortia, and engage academic institutions.Monitor and measure impact: Set clear KPIs, conduct regular reviews, and scale successful projects.Plan for long-term sustainability: Stay informed on AI developments, budget for ongoing investment, and anticipate future needs.

By following this roadmap, companies can navigate the complexities of Generative AI adoption, mitigate risks, and position themselves for long-term success in an increasingly AI-driven world.

The path forward: Embrace generative AI or face extinction

The lessons from the “Innovator’s Dilemma” speak volumes: focusing solely on today’s successes without investing in disruptive technologies like Generative AI is a risky bet. AI isn’t just another tool; it’s a fundamental shift in how businesses operate.

Companies that fully integrate Generative AI into their operations will not only survive but thrive, setting the pace for their industries. In contrast, those who fail to adapt risk meeting the same fate as Blockbuster and BlackBerry—left behind in a world increasingly driven by AI that rewards the bold and punishes the complacent.

Jim Collins, in “Good to Great,” emphasizes that truly great companies continuously evolve and align their strategies with the future. Klarna’s decision to harness Generative AI reflects this principle, demonstrating proactive leadership and a commitment to innovation.

Their approach serves as a blueprint for other companies: not just to adopt new technology but to redefine their operations and strategies around it. Without such commitment, companies risk stagnation—going from good to gone.

Conclusion

For leaders, the message is simple: adapt, innovate, and lead, or risk becoming a cautionary tale. Klarna and J.P. Morgan’s transformations illustrate that the future belongs to those willing to embrace change and leverage disruptive technologies to their advantage. The decision isn’t just about adopting new technology; it’s about ensuring your company is poised to excel tomorrow.

As Generative AI continues to advance rapidly, the window of opportunity to lead is narrowing. By taking proactive steps—assessing readiness, developing strategic roadmaps, investing in talent, and more—companies can overcome barriers and seize the transformative potential of AI. Embrace the change because Generative AI won’t wait, and neither should you. The time to act is now. 

References

1. Christensen, C. M. (1997). The innovator’s dilemma: When new technologies cause great firms to fail. Harvard Business Review Press.

2. Treiber, M. (2023). Klarna’s bold move: What it means for the future of SaaS in the enterprise. IKANGAI. https://www.ikangai.com/klarnas-bold-move-what-it-means-for-the-future-of-saas-in-the-enterprise/

3. Gladwell, M. (2000). The tipping point: How little things can make a big difference. Little, Brown.

4. McKinsey & Company. (2023). The state of AI in 2023: Generative AI’s breakout year. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-AIs-breakout-year

5. J.P. Morgan. (2023). J.P. Morgan introduces AI-powered chatbot for research analysts. J.P. Morgan News. https://www.jpmorgan.com

6. Deloitte. (2022). State of AI in the enterprise, 5th edition. Deloitte. https://www.deloitte.com

7. PwC. (2022). Consumer intelligence series: Trusted tech. PwC. https://www.pwc.com

8. Collins, J. (2001). Good to great: Why some companies make the leap… and others don’t.

Want access to hundreds of hours of expert talks?

Sign up for our Pro+ membership and watch presentations from some of the world’s leading companies in AI.

That’s 100+ hours from all of our events in one convenient place.

AI Accelerator Institute Pro+ membership
Unlock the world of AI with the AI Accelerator Institute Pro Membership. Tailored for beginners, this plan offers essential learning resources, expert mentorship, and a vibrant community to help you grow your AI skills and network. Begin your path to AI mastery and innovation now.

AI for health & networking: Christie Mealo’s tech impactAI for health & networking: Christie Mealo’s tech impact

My name is Christie Mealo, and I’m a Senior AI Engineering Manager at CVS Health, where I focus on AI-driven health products, primarily in the area of diabetes management. 

In addition to my work at CVS, I’m the founder of Orbit, an AI-powered contact book and networking app designed for value-based networking. 

I also lead the Philly Data & AI Meetup group, help guide the Philly Tech Committee, and serve as a chair on Philly iConnect. 

Through these roles, I’m deeply involved in organizing communities and events across Philadelphia and the larger East Coast, helping to foster collaboration and innovation in the tech space.

It’s been a crazy year for those in tech—what’s excited you most about recent developments?

It’s been an incredible year in tech, and what excites me most is how generative AI has significantly lowered barriers to entry and creativity for so many people. This technology is empowering individuals with new and novel ideas, allowing them to bring their visions to life in ways that were previously out of reach. 

I believe this will shake up the economy in a positive way, leading to the development of a lot of innovative products and introducing new competitors into the market. While we’re undoubtedly in the midst of a hype cycle—or perhaps only at the beginning—it’s thrilling to see where this will take us in the coming years.



What role do you see generative AI playing across industries over the next 6-12 months, and where do you think it will have the biggest impact?

Generative AI is poised to significantly impact various industries over the next 6-12 months. While it’s clear that it will continue to transform fields like copywriting, advertising, and creative content, its influence is much broader.

On one hand, generative AI is incredibly exciting because it lowers barriers to entry for innovation and creativity. Tools like ChatGPT, Claude, Gemini, and GitHub Copilot are not only enabling individuals and smaller companies to bring novel ideas to market more quickly but are also optimizing workflows. Personally, these tools have streamlined my day-to-day work, saving me approximately 10 hours each week by automating routine tasks and enhancing productivity.

However, there are valid concerns about the impact of generative AI, particularly regarding its effect on the internet and the truth. As AI-generated content becomes more prevalent, there is a real risk of misinformation and the proliferation of fake information online. This not only threatens the integrity of the internet but also raises ethical questions that need urgent attention.

Interestingly, these challenges are creating new opportunities for AI ethics as a field. We’re likely to see significant job growth in areas focused on developing frameworks and tools to manage these risks, ensuring that AI is used responsibly and that the internet remains a trusted source of information.

While we are only getting started, the balance of benefits and challenges will ultimately shape the economic and social impact of generative AI. It’s an exciting time, but also one that demands careful consideration of the ethical implications.

How can companies effectively navigate the ethical considerations that come with the rapid advancements in AI technology? 

As an ex-McKinsey person myself, I feel compelled to steal some good advice and guidelines they have provided for this one:

Establish clear ethical guidelines: Companies should start by defining ethical principles that align with their values and business goals. These should cover critical areas such as bias and fairness, explainability, transparency, human oversight, data privacy, and security. For instance, ensuring that AI models do not inadvertently discriminate based on race, gender, or other protected characteristics is essential.Implement human oversight and accountability: It’s important to have a “human in the loop” to oversee AI decisions, particularly in high-stakes scenarios like financial services or healthcare. This ensures that there is always a human judgment applied to AI outputs, which can help mitigate risks associated with AI decision-making.Continuous monitoring and adaptation: Ethical AI isn’t a one-time effort. Companies should establish ongoing monitoring systems to track the performance and impact of AI models over time. This includes regular audits to check for biases or inaccuracies that might emerge as the AI system interacts with new data.Educate and empower employees: Building a culture that supports ethical AI requires educating employees across the organization about the importance of these issues. Providing training on ethical AI practices and ensuring that teams are equipped with the necessary tools to implement these principles is crucial for long-term success.

Generative AI is a whole new ballgame, and we still have a lot to learn, but these pillars provide a good start.

What are you excited about at Generative AI Summit Toronto, and why is it important to get together with other leaders like this?

I’m really excited about the opportunity to connect with a diverse group of AI professionals and thought leaders at the Generative AI Summit in Toronto. 

The event will feature cutting-edge discussions on the latest advancements in generative AI, and I’m particularly looking forward to the workshops and panels that provide opportunities to interact directly with experts. It’s important to gather with other leaders in the field to share insights, foster collaboration, and drive innovation in this rapidly evolving space.

Christie will be moderating at AI Accelerator Institute’s Generative AI Summit Toronto.

Join us on Novevember 20, 2024.

Get your tickets below.

Register | Generative AI Summit Toronto | Uniting AI’s builders & execs
Unite with hundreds of pioneering engineers, developers & executives that are facilitating the latest tech revolution.

3 learnings from bringing AI to market3 learnings from bringing AI to market

This article is based on Mike Kolman’s talk at our sister community’s Amsterdam Product Marketing Summit.

Need to bring an AI-powered product to market but don’t know where to start? You’re in the right place. As AI transforms our industry at lightning speed, it’s easy to feel left behind. But don’t worry – I’ve got your back. 

Drawing from my experience at Salesforce, I’ll share three essential learnings to help you navigate the AI landscape with confidence. In this article, we’ll dive into:

The evolution of AIThe AI hype cycle and where we stand todayWhy many AI projects fail and how to set yours up for successThree key learnings from my experience of launching an AI product.

Let’s get into it.

The evolution of AI

It can’t have escaped your notice that we’re in a bit of an AI revolution right now – but how did we get here? Let me set the stage.

For about 30 years, we were in wave one of artificial intelligence – predictive AI, which uses numbers data to generate very simple predictions.

Since 2022, when ChatGPT launched their 3.5 model, anyone working in B2B SaaS has been hearing the terms “generative AI,” “Gen AI,” or “artificial intelligence” about a thousand times a day. This marks the beginning of wave two, which involves using natural language – speaking or writing to a large language model (LLM) – to generate something that didn’t exist before.

We’re already moving rapidly towards the third wave. This involves building autonomous agents that can automate tasks so you don’t have to do them anymore. 

I imagine it won’t be long before we see wave four – artificial general intelligence. Think Terminator!