Month: Esfand 1403 (February–March 2025)

Microsoft’s Majorana hype: Real proof or just marketing?

Introduction: The quest for reliable qubits

Quantum computing faces a fundamental challenge: qubits, the basic units of quantum information, are notoriously fragile.

Conventional approaches, such as superconducting circuits and trapped ions, require intricate error-correction techniques to counteract decoherence. Microsoft has pursued an alternative path: Majorana-based topological qubits, which promise inherent noise resistance due to their non-local encoding of quantum information.

This idea, based on theoretical work from the late 1990s, suggests that quantum states encoded in Majorana zero modes (MZMs) could be immune to local noise, reducing the need for extensive error correction. Microsoft has invested two decades into developing these qubits, culminating in the recent “Majorana 1” prototype.

However, given past controversies and ongoing skepticism, the scientific community remains cautious in interpreting these results.

The scientific basis of Majorana-based qubits

Topological qubits derive their stability from the spatial separation of Majorana zero modes, which exist at the ends of specially engineered nanowires. These modes exhibit non-Abelian statistics, meaning their quantum state changes only through specific topological operations, rather than local perturbations. This property, in theory, makes Majorana qubits highly resistant to noise.

Microsoft’s approach involves constructing “tetrons,” devices hosting four Majorana zero modes (two pairs) that encode a single logical qubit in their joint parity state. Operations are performed using simple voltage pulses, which avoids the complex analog controls required for traditional superconducting qubits.

Additionally, digital measurement-based quantum computing is employed to correct errors passively. If successful, this design could lead to a scalable, error-resistant quantum architecture.
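As a rough sketch of the parity encoding described above (the notation here is illustrative, not Microsoft’s published convention): a tetron hosts four Majorana modes $\gamma_1,\dots,\gamma_4$ with fixed total parity, and the logical states differ in the parity of each pair:

```latex
% Illustrative tetron encoding: logical states live in the joint
% fermion parity of the two Majorana pairs (1,2) and (3,4).
|0\rangle_L = |{+}\rangle_{12}\,|{+}\rangle_{34},
\qquad
|1\rangle_L = |{-}\rangle_{12}\,|{-}\rangle_{34},
\qquad
\text{where } i\gamma_a\gamma_b\,|{\pm}\rangle_{ab} = \pm\,|{\pm}\rangle_{ab}.
```

A single measurement of the pair parity $i\gamma_1\gamma_2$ then distinguishes the two logical states, which is the kind of parity measurement the Majorana 1 experiments report.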

However, while the theoretical framework for Majorana qubits is robust, experimental verification has been challenging. Majorana zero modes do not occur naturally and must be engineered in materials like indium arsenide nanowires in proximity to superconductors.

Establishing that these states exist and behave as expected has proven difficult, leading to past controversies.



Historical controversies: The 2018 retraction

A major setback for Microsoft’s Majorana initiative occurred in 2018 when researchers, including Leo Kouwenhoven’s team at TU Delft (funded by Microsoft), published a Nature paper claiming to have observed quantized conductance signatures consistent with Majorana zero modes.

This was hailed as a breakthrough in topological quantum computing. However, by 2021, the paper was retracted after inconsistencies were found in data analysis. Independent replication attempts failed to observe the same results, and an internal investigation revealed that a key graph in the original paper had been selectively manipulated.

This event, dubbed the “Majorana Meltdown,” significantly damaged the credibility of Microsoft’s approach. It highlighted the challenge of distinguishing genuine Majorana modes from other quantum states that mimic their signatures due to material imperfections. Many physicists became skeptical, arguing that similar issues could undermine subsequent claims.

Experimental progress and remaining challenges

Despite the 2018 controversy, Microsoft and its collaborators have continued refining their approach. The recent announcement of the “Majorana 1” chip in 2025 presents experimental evidence supporting the feasibility of Majorana-based qubits.

Key advancements include:

  • Fabrication of “topoconductor” materials: Microsoft developed a new indium arsenide/aluminum heterostructure to reliably host Majorana zero modes.
  • Parity measurement success: The team demonstrated that they could measure the qubit’s parity (even vs. odd electron occupation) with 99% accuracy, a crucial validation step.
  • Increased parity lifetime: The qubit’s state exhibited stability over milliseconds, significantly surpassing superconducting qubits’ coherence times (which are typically in the microsecond range).
  • Digital control implementation: Unlike analog-tuned superconducting qubits, Majorana qubits can be manipulated with simple voltage pulses, theoretically enabling large-scale integration.

While these are important steps forward, the experiments have not yet demonstrated key quantum operations, such as two-qubit entanglement via non-Abelian braiding. Until this milestone is achieved, claims about the superiority of topological qubits remain speculative.

Comparison with other qubit technologies

To assess Microsoft’s claims, it is useful to compare Majorana qubits with existing quantum computing platforms:

  • Superconducting qubits (IBM, Google): These have demonstrated successful quantum error correction and multi-qubit entanglement but require extensive calibration and error correction. Fidelity levels for two-qubit gates currently range around 99.9%.
  • Trapped-ion qubits (IonQ, Quantinuum): These offer superior coherence times (seconds vs. microseconds for superconductors) but suffer from slow gate speeds and complex laser-based control.
  • Majorana-based qubits: Theoretically provide built-in error protection, reducing the need for extensive error correction. However, experimental validation is still in progress, and large-scale integration remains untested.

Microsoft has argued that Majorana qubits will enable a quantum computer with a million qubits on a single chip, a feat that conventional qubits struggle to achieve.

While this is an exciting possibility, many researchers caution that scaling challenges remain, especially given the extreme conditions (millikelvin temperatures, precise nanowire fabrication) required for Majorana qubits.

Skepticism from the Scientific Community

Despite recent progress, many physicists remain skeptical of Microsoft’s claims.

Key concerns include:

  1. Lack of direct evidence for Majorana zero modes: While Microsoft’s 2025 Nature paper presents strong supporting data, the scientific community has yet to reach a consensus that Majorana modes have been definitively observed.
  2. Alternative explanations for observed phenomena: Many experimental signatures attributed to Majorana states could be explained by disorder-induced states or other trivial effects in semiconductor-superconductor interfaces.
  3. Unverified large-scale claims: Microsoft’s assertion that its approach will lead to fault-tolerant quantum computing “within years, not decades” is met with skepticism. Experts note that even the most advanced conventional quantum computers are still years away from practical applications, and scaling from an 8-qubit chip to a million-qubit processor is an enormous leap.
  4. Comparison to competing approaches: Some argue that improvements in quantum error correction for superconducting and trapped-ion qubits may render topological qubits unnecessary by the time they are fully realized.

A Promising but unproven path

Microsoft’s Majorana-based qubits represent one of the most ambitious efforts in quantum computing. The theoretical promise of intrinsic error protection and simplified quantum control is compelling, and recent experiments provide encouraging evidence that topological qubits can be realized.

However, historical controversies, ongoing skepticism, and the lack of key demonstrations (such as two-qubit gates) mean that these qubits are not yet a proven alternative to existing technologies.

While Microsoft has made significant strides in overcoming past setbacks, their claims of imminent large-scale quantum computing should be met with caution.

The coming years will be critical in determining whether Majorana qubits will revolutionize quantum computing or remain an elegant but impractical idea. As independent verification and further experiments unfold, the scientific community will ultimately decide whether Microsoft’s bold bet pays off.

Building advanced AI systems: Challenges and best practices

My name is Akash, co-founder and CEO of Bellum.ai. Our mission is to help companies build reliable AI systems in production. In this talk, I’ll share insights from working with hundreds of companies using AI, highlighting what works, what doesn’t, and where AI development is headed.

The journey to AI innovation

Early experiences with AI

AI has always been on the horizon, but my moment of realization came about four to five years ago, at the beginning of COVID, when I first experimented with GPT-3’s API. It wasn’t perfect—prone to generating random, inaccurate responses—but it demonstrated a capability never seen before: auto-completing sentences in a meaningful way.

At that time, I was working in recruiting software, leveraging AI for tasks like job description generation and email classification. Our AI-powered job description generator went viral, demonstrating the potential for AI-driven automation.

However, implementing these models in production came with significant challenges—prompt engineering, evaluation, and pipeline collaboration were all difficult.

The breakthrough with ChatGPT

When ChatGPT launched in November 2022, it was clear that AI was going mainstream. The challenges we faced with implementing AI in production—reliability, evaluation, and collaboration—became widespread across industries.

Recognizing this, my co-founders and I started Bellum.ai to help businesses effectively leverage large language models (LLMs) and build robust AI systems.

Additionally, my experience at McKinsey provided insight into AI governance and the evolution of AI technologies. Witnessing the rise of GPT models and their growing impact across industries reaffirmed the need for structured AI deployment frameworks.

Revolutionize business onboarding processes in the AI era – KYB solution

Manual verification, checking, and onboarding are things of the past. Nowadays, with the emergence of artificial intelligence technology, almost all operations have become streamlined and automated.

Pre-trained artificial intelligence and machine learning algorithms help organizations reduce manual effort, which is time-consuming and error-prone: humans make mistakes when fatigued or under workload pressure.

AI, by contrast, does not tire, and automated checks complete in a single click. Companies are therefore replacing manual processes with automated ones and moving toward streamlined operations across the board.

Artificial intelligence has revolutionized the business onboarding process, enabling organizations to streamline partnerships, investments, and other collaborations with outside entities.

This blog post will highlight the role of AI technology in business onboarding and will explain how it has revolutionized the process. 

How can AI revolutionize the onboarding process? 

Companies have to deal with customers, employees, and other organizations for various purposes, so they need a streamlined onboarding process. Before bringing entities on board, it is necessary to verify their authenticity and legitimacy, which is a major part of the onboarding process.

Traditionally, companies verify entities manually and perform every step of the onboarding process by hand. Employees collect documents, analyze them, verify them, and then onboard entities.

That manual work is no longer necessary: companies can now verify entities remotely, employing artificial intelligence to streamline both verification and onboarding.

Many companies offer advanced solutions that involve AI technology in their operation and provide a streamlined onboarding process for customers and businesses. 

Companies that onboard other organizations as customers or partners can use a Know Your Business (KYB) service, which applies AI across its operations, protects against fraud, and streamlines onboarding.

A KYB solution uses AI checks to verify entities quickly and assess their risk potential, supporting well-informed onboarding decisions.
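To illustrate the idea (this is not any real vendor’s API; the field names, weights, and thresholds below are invented for the sketch), an automated KYB check might combine a few verification signals into a single risk score that drives the onboarding decision:

```python
# Hypothetical KYB risk-scoring sketch (illustrative only; all fields,
# weights, and thresholds are invented, not a real vendor's API).
def kyb_risk_score(business: dict) -> tuple[float, str]:
    """Combine simple document/registry checks into a 0-1 risk score."""
    score = 0.0
    if not business.get("registry_match"):   # company not found in registry
        score += 0.5
    if business.get("documents_forged"):     # forgery flagged by AI checks
        score += 0.4
    if business.get("sanctions_hit"):        # name appears on a watchlist
        score += 0.3
    score = min(score, 1.0)
    decision = ("reject" if score >= 0.5
                else "review" if score >= 0.2
                else "onboard")
    return score, decision

print(kyb_risk_score({"registry_match": True}))   # clean profile
print(kyb_risk_score({"registry_match": False}))  # fails the registry check
```

In a real system, each signal would come from a separate automated check (document forensics, registry lookup, sanctions screening), but the pattern of merging them into one decision is the same.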

Role of automation in business verification and onboarding 

Artificial intelligence (AI) automates business verification, customer authentication, and onboarding. Companies that employ AI in these operations can cut the time needed to onboard new entities by as much as 50%.

Pre-trained algorithms within the verification and onboarding process perform thorough screening, cross-checking and verifying in one click. With the help of artificial intelligence, companies have devised all-in-one solutions for streamlined onboarding, such as Know Your Business (KYB).

In an automated process, companies verify collected documents through automated checks, which highlight risk potential in case the documents are forged or fake.

Fraudsters generate fake identity papers that are difficult to identify manually. Companies need sharp AI detectors to recognize fraudsters’ tactics. Along with a streamlined onboarding process, automated AI checks therefore offer strong protection against fraud.

Businesses that have to deal with other business entities can easily identify shell companies through advanced verification solutions. 

How does a streamlined business onboarding process contribute to growth and success? 

A streamlined onboarding process is a major draw for customers; nobody prefers a time-consuming, complex verification and onboarding process.

Artificial intelligence offers a streamlined onboarding process and enables organizations to increase user interest and satisfaction. Through remote, AI-based digital verification, companies let users and clients get verified from home, which makes the organization appear reliable and credible.

Moreover, because automated services deliver highly accurate results, they make a business credible and trustworthy, helping it attract more clients and seize business opportunities.

A streamlined verification and onboarding process therefore contributes to business growth and success: it enables organizations to onboard more clients and partner with highly successful businesses.

Organizations without a streamlined onboarding process may lose valuable clients and partners. People prefer simplified operations, which artificial intelligence delivers in the form of a streamlined business onboarding system.

Final words 

Companies can use artificial intelligence to streamline business onboarding. Automated AI checks within the onboarding process enable organizations to partner with successful organizations and attract more clients.

Users prefer a streamlined onboarding and verification process, and AI makes that possible: it has transformed onboarding and verification protocols for organizations.

Manual verification and data collection are now things of the past; businesses use remote services, digital means, and automated techniques to streamline operations such as document collection, analysis, and verification.

AI image detection: Types, applications, and future trends

AI image detection is increasingly used to identify fake photos. An AI-powered tool known as a Photoshop detector can recognize and detect a variety of objects, patterns, and pictures. To do so, the system learns from large amounts of data, objects, and photos, then applies its observations to identify objects in new photographs.

Moreover, a Photoshop detector positions itself as a safer alternative to existing security tools and procedures, especially when combined with cutting-edge AI software and machine learning.

Advanced tools enable faster examination of the provided data, increasing the technology’s overall accuracy and efficiency. The technology also helps numerous platforms and regulated businesses protect their systems.

Role of learning

Human brains are used to spotting a specific object in an image; we can do it at any time, without conscious thought. For computers, the task is much harder, which is why tech companies train systems with artificial intelligence to perform it as effortlessly as humans do.

To train a system, it must be given many examples or samples of the object. In short, the system needs labeled images to learn the object’s size, shape, and other attributes. There is no fixed number of training images, but as a rule, the more pictures, the better the learning.

It is also crucial to show the object in a variety of places and sizes, so that the system can spot it instantly under different conditions.


What is image classification?

This process is all about labeling objects in an image and sorting them into categories. For instance, if Google is asked to search for pictures of cats, it will show a plethora of images, including real photos, illustrations, and drawings.

It is an advanced form of image detection in which AI examines images, identifies objects, and sorts them into categories.
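To make the idea concrete, here is a deliberately tiny classifier sketch (illustrative only): instead of a real CNN, each “image” is reduced to a made-up two-number feature vector, and a nearest-centroid rule assigns the label.

```python
# Minimal nearest-centroid "image classifier" sketch (illustrative only).
# Real systems use CNNs; here each "image" is a tiny feature vector.
def centroid(vectors):
    """Average the feature vectors of one labeled class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Assign the label whose class centroid is closest to the sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Labeled training "images": two invented features per image
training = {
    "cat": [[0.9, 0.2], [0.8, 0.3]],
    "dog": [[0.2, 0.9], [0.3, 0.8]],
}
centroids = {label: centroid(vs) for label, vs in training.items()}
print(classify([0.85, 0.25], centroids))  # → cat
```

The point of the sketch is the workflow, not the model: learn a summary per labeled category, then assign new inputs to the closest category.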

How Does AI Image Detection Work?

The standard image detection process through AI undergoes a well-defined sequence of operations.

To identify patterns successfully, AI models require a massive collection of properly labeled images. Processing starts by resizing images and then normalizing them, which improves quality for better analysis.

Convolutional neural network (CNN) models then extract essential features from the input, such as color information, texture detail, and edges.

During training, the model may learn through supervised machine learning, unsupervised learning, or both. After training, it automatically examines pictures, identifies objects, and organizes them according to the patterns it has observed.

As they operate, AI systems continue to improve by processing new data and user feedback.
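The resize-and-normalize steps above can be sketched in miniature, as a pure-Python stand-in for what libraries such as Pillow or OpenCV do on real images:

```python
# Sketch of the preprocessing steps described above: resize, then normalize.
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour downscale of a 2-D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    return [[img[r * h // new_h][c * w // new_w] for c in range(new_w)]
            for r in range(new_h)]

def normalize(img):
    """Scale pixel values from 0-255 into 0-1 for stable model training."""
    return [[px / 255 for px in row] for row in img]

img = [[0, 64, 128, 255]] * 4        # toy 4x4 grayscale image
small = resize_nearest(img, 2, 2)    # downscale to 2x2
print(normalize(small))
```

Real pipelines use better interpolation and per-channel statistics, but the order of operations (resize, then normalize, then feed to the model) is the same.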

Types of AI image detection

An image can be analyzed in a variety of ways because a single image has several aspects to consider. Here are some of the types of image detection that can be considered while detecting an image: 

Identifying objects

This kind of detection finds distinct objects in an image. After learning from samples, the AI image detector recognizes relevant objects based on context.

For example, given an image of a room, the detector may identify items such as the bed, clock, study table, and fan. People in robotics, security, and other fields use this technology to decide on appropriate follow-up actions.

Facial examination

This kind of facial recognition highlights certain facial features, such as the eyes and nose. For verification, the algorithm then compares these properties with those stored in the database.

In addition to being utilized for security, this kind of detection is widely employed to unlock phones. Occasionally, it can also identify a person’s gender, age, and emotions.  

Setting analysis

This kind of technology examines the scene’s overall context rather than just its elements. Given a picture of a park, for instance, the system will recognize the grass, swings, people, and weather.

Exception monitoring

This detection method finds unusual patterns in an image. Healthcare facilities commonly use it to analyze MRIs and X-rays, where the algorithm can identify anomalous growths, such as tumors, or foreign objects.

The approach can also flag out-of-place objects in a scene. If someone leaves luggage in a waiting room, for example, the system detects the unattended item and notifies the user.
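A toy version of this idea (illustrative only, far simpler than medical imaging): flag readings that deviate from the rest of the data by more than a z-score threshold.

```python
# Toy anomaly ("exception") detector: flag values far from the mean.
# Real systems analyze pixel regions; here we use 1-D intensity readings.
from statistics import mean, stdev

def find_anomalies(values, z_threshold=2.0):
    """Return the values whose z-score exceeds the threshold."""
    m, s = mean(values), stdev(values)
    return [v for v in values if s and abs(v - m) / s > z_threshold]

readings = [10, 11, 9, 10, 12, 10, 11, 48]   # 48 is the unusual pattern
print(find_anomalies(readings))  # → [48]
```

Image-based anomaly detectors apply the same principle in a learned feature space rather than on raw numbers: whatever lies far from the model of “normal” gets flagged.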

Text analysis

This kind of detection program can analyze, recognize, and convert text in photos into an editable format. It can read physical documents and license plates, whose text can then be edited as needed, and it can also translate the text contained in an image to aid understanding.


Future trends in AI image detection

AI image detection is advancing quickly, and several emerging trends point to where the technology is headed.

Integration with Augmented Reality (AR) and Virtual Reality (VR)

AI-powered image detection improves AR/VR products used in gaming, in educational institutions, and in healthcare centers.

Companies can leverage AI-operated image detection through AR technology to establish digital fitting rooms.

Edge computing for faster processing

Technological advances now allow AI models to run efficiently on edge devices, including smartphones and drones, for instant on-device image processing.

This makes AI applications faster, since they no longer rely heavily on cloud processing.

AI-driven image editing and enhancement

AI can automatically enhance images and restore old photos to produce realistic results.

AI-driven editing applications save time in the workflows of professional imaging experts and designers.

Enhanced medical imaging and diagnosis

AI-operated medical image detection systems are expected to reach higher accuracy in the coming years, enabling physicians to detect diseases earlier and design suitable treatments for patients.

Doctors making remote diagnoses via telemedicine will use AI platforms to examine medical images and support clinical evaluations.

Conclusion

This powerful technology allows computers to identify objects in an image. It can be used for various purposes and in various fields to detect images, such as healthcare, retail, self-driving, and more. With time, the technology is expected to grow more in terms of better detection of images to enhance automation in various industries. 

How recommender systems support social learning in companies

What do the streaming service Netflix, the business platform LinkedIn, and the dating portal Tinder have in common? All three use so-called recommender systems (RS).

RS can suggest exactly the right series for an evening of binge-watching. They surface candidates for expanding your business network who work on the same topics.

Or they recommend potential partners, whether for a long-term relationship or just a nice evening. In learning, and especially corporate learning, they can take existing e-learning platforms to a whole new level, provide valuable didactic support, and create the basis for forms of social learning.

Recommender systems are software solutions that suggest movies and series, potential dating partners, shopping products, the next online course, and other things likely to interest users. RS therefore intervene in the human decision-making process and can guide it, or even set it in motion in the first place.

In the learning and corporate learning sector, recommender solutions are no longer a novelty, at least in theory. RS can be the technological basis for adaptive learning systems, which can be used to adapt content and teaching methods to the specific needs of learners, therefore creating the conditions for successful knowledge and skills development.

But RS can do even more. They can lay the foundation for successful collaborative learning, in which learners work together for their mutual benefit. How can such systems specifically support learners and trainers? And why are recommender systems much more worthwhile in corporate learning than on dating platforms like Tinder? I will show this in the following article.

Personalized and non-personalized recommendations

Not every recommendation is based on a recommender system that uses machine learning algorithms. A top 10 book recommendation from a newspaper editorial team or the top 3 most-watched series on a streaming platform is not the result of a recommender system.

Recommendations from ML-based solutions differ in that they provide personalized and, therefore, tailored suggestions that take into account the individual needs of users. Non-personalized recommendations, such as the aforementioned top 10 books and top 3 series, simply reflect trends but are not personalized suggestions.

Recommender systems work with various types of user data. These include explicit entries, which are collected via an online query, for example. The query may ask for the age of a user, his or her interests and previous experiences, gender, origin, and goals.

On the other hand, recommender systems also work with implicit data that results from usage behavior, such as previously viewed films and series (e.g., from streaming providers), past transactions (e.g., on e-commerce platforms), swiped people (e.g., on dating portals) or even completed online courses (e.g., e-learning).

Based on such data, recommender systems use algorithms to generate a list of suitable suggestions for each individual user. The results rest on a relevance assessment, i.e., an evaluation of the probability that a suggestion matches a user’s interests.

The recommender algorithms are optimized by feedback from the users themselves. This, in turn, includes implicit data (e.g., accepting or ignoring a suggestion) and explicit data (e.g., star ratings of suggestions and products).
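A minimal sketch of this kind of scoring (illustrative only, far simpler than a production recommender): each item the target user has not yet consumed is scored by the similarity-weighted votes of other users, using implicit binary feedback.

```python
# Minimal user-based recommender sketch (illustrative, not production code).
# Implicit feedback: 1 = user consumed the item, 0 = not yet.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two user interaction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(target, others, items):
    """Score unseen items by similarity-weighted votes of other users."""
    scores = {}
    for user in others:
        sim = cosine(target, user)
        for i, item in enumerate(items):
            if target[i] == 0 and user[i] == 1:
                scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

items = ["course_a", "course_b", "course_c"]
alice = [1, 1, 0]                    # target learner
others = [[1, 1, 1], [0, 1, 1]]      # similar learners who took course_c
print(recommend(alice, others, items))  # → ['course_c']
```

Feedback loops fit naturally into this scheme: accepting or ignoring a suggestion simply updates the interaction vectors that the next round of similarity scoring runs on.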



Why are recommender systems so successful?

Recommender systems have been used for many years to serve the individual needs of users. Their success rests on the insight that people like to rely on the recommendations of others, especially when making everyday decisions.

Whether it’s a hotel for the next vacation, the next series to binge-watch at the weekend, or a matching pair of trousers for the summer, we are happy to let other people (friends, family, influencers, like-minded people, role models, etc.) guide our decisions and rely on their judgment. We are social beings, and what others in our peer group like and benefit from will often please and benefit us too.

The aim of recommenders is to simulate precisely this recommendation behavior. The more relevant recommendations an algorithm provides, the more users’ trust in the recommender system grows.

This insight can be put to excellent use in corporate learning to present learners with the right content and, therefore, offer everyone an individually tailored and targeted learning path. In addition, RS can also suggest the right learning partner (partner, mentor, or tutor).

The use of recommender systems in social learning

There are some recommender solutions that provide valuable support for important goals and projects. Not only do they recommend learning content, but they also suggest the right learning partners to help each other master upcoming tasks. This is how forms of social learning are made possible.

Social learning is generally about bringing people together in some way for learning and training purposes. But people don’t always have to work on a task together in parallel to benefit from social learning. Sharing what you have learned and presenting it to others can also be categorized under this heading.

There are roughly two types of recommender systems used in corporate and social learning. Item-to-people recommender systems suggest to users which task they should work on next or which course might suit their interests and previous knowledge.

People-to-people recommender systems (sometimes also referred to as peer-to-peer recommender systems), on the other hand, suggest people, e.g. for collaborative tasks or for tutoring and mentoring.


Use cases for people-to-people recommender systems

The right learning partner

The use of automation technologies in corporate learning sometimes results in learners working in isolation and finding themselves in monologic learning situations because tasks that were previously performed by humans can now be carried out by AI solutions.

This includes, for example, providing help with problem-solving or giving feedback after a task has been completed. However, automation technologies such as people-to-people recommender systems can also be used to avoid this problem.

To celebrate learning successes together, guard against frustration, motivate each other, and exchange ideas regularly, a learner needs the right learning partner. Based on explicit and implicit data such as age, previous knowledge, interests, and completed learning content and progress, RS can suggest the right companion for each learner.

In doing so, the individual data is prioritized: for example, when selecting a learning partner, is previous learning success more important in the context than age or previous knowledge? What type of learning partner is needed in a particular situation? A mentor or a tutor or something completely different?

When it comes to initiating learning partnerships, so-called people-to-people reciprocal recommenders (1-to-1) are usually used. The special feature of this is that both potential learning partners must decide in favor of each other (!). This approach is comparable to a dating platform.

A learning partnership is only formed if both partners believe that it really makes sense based on selected characteristics and is desired by both. The decision for a learning partnership is, therefore, based on reciprocity.
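The reciprocity rule can be sketched as follows (the score matrix and threshold here are invented for illustration): a pair is only proposed when both predicted scores clear the bar.

```python
# Sketch of a people-to-people *reciprocal* recommender: a match is only
# proposed when the predicted interest is mutual (threshold is an assumption).
def mutual_matches(pref, threshold=0.5):
    """pref[a][b] = predicted score that a would accept b as a partner."""
    matches = []
    people = sorted(pref)
    for i, a in enumerate(people):
        for b in people[i + 1:]:
            if pref[a].get(b, 0) >= threshold and pref[b].get(a, 0) >= threshold:
                matches.append((a, b))
    return matches

pref = {
    "ana": {"ben": 0.9, "cem": 0.2},
    "ben": {"ana": 0.7, "cem": 0.8},
    "cem": {"ana": 0.9, "ben": 0.3},
}
print(mutual_matches(pref))  # → [('ana', 'ben')]
```

Note that cem rates ana highly, but the interest is not mutual, so no match is proposed; that one-sided case is exactly what a non-reciprocal recommender would get wrong.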

The right team

To prevent learning isolation, collaborative learning settings are useful. For example, tasks are conceivable for which one or more learning partners are necessary. If other learners are needed to complete the task, the learner asks others for support. A recommender system then suggests the right learning partner(s) based on explicit and/or implicit data.

Here the learning partnership is only temporary, and the knowledge needed to achieve the goal matters most, so soft criteria (e.g., previous experience with other learners) may carry less weight. Even so, a people-to-people reciprocal recommender is recommended here too, so that no one is assigned a learning partner without consent.

The right tutor or mentor

If a learner gets stuck on a task and can’t get any further, a recommender system can suggest a tutor or mentor to them. This could be another learner who has already completed this task and is more advanced.

This learner then acts as a tutor for the one seeking help and guides them through the next step (peer tutoring). On the basis of learning progress and/or prior knowledge, a (peer) learner can be proposed as a tutor to help someone take the next step.

It is also possible, however, for the recommender system to select from a pool of mentors and make suggestions. Here, too, it makes sense to use a people-to-people reciprocal recommender because, on the one hand, mentors should not be overburdened, and, on the other hand, learners should be able to choose their mentors based on certain aspects.

On the mentor side, there is also a challenge that a colleague of mine once called “the Tinder problem”: highly qualified and “popular” mentors risk being recommended so often that they are overwhelmed by the number of suggested contacts. This risk can be minimized by using data that provides information about a mentor’s current workload.
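The workload check described above can be sketched as a simple re-ranking step; the field names and the linear capacity penalty are illustrative assumptions:

```python
def mentor_rank(match_score, active_mentees, capacity):
    """Scale the content match by remaining capacity so that
    fully booked mentors fall to the bottom of the list."""
    remaining = max(capacity - active_mentees, 0)
    return match_score * remaining / capacity

mentors = [
    {"name": "Expert A", "match": 0.9, "active": 5, "capacity": 5},
    {"name": "Expert B", "match": 0.7, "active": 1, "capacity": 5},
]
ranked = sorted(
    mentors,
    key=lambda m: mentor_rank(m["match"], m["active"], m["capacity"]),
    reverse=True,
)
print([m["name"] for m in ranked])  # Expert B first: A is fully booked
```

Expert A is the better content match, but with no free capacity their score drops to zero, so the less popular Expert B is suggested instead.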

Learners, in turn, should be able to decide who becomes a constant companion based on their own needs. Relevant characteristics include availability (is the mentor available at all, and for time-intensive mentoring or only for short sessions when a work step is stuck?), popularity (reviews by previous mentees), and expertise (is the mentor really an expert in the relevant field and suited to helping with this specific step, or rather an expert in a different area?).

With the help of people-to-people recommender systems, it is possible for companies to create a (global) learning network within their organization. Employees from different departments and locations can meet to learn.

As great as the benefits of recommender systems are in the learning context and for social learning, there are some important things to consider before and during the development and implementation of an RS.



What to watch out for

Because recommender systems collect and process user data, the utmost caution and sensitivity in handling this data is essential. The data (e.g., behavioral, demographic, psychographic, and geographic data) is used to create a user profile in order to present users with customized content based on segmentation.

It is essential to comply with the current legislation in your country regarding the handling of this sensitive data and to be transparent with your users about what data you collect and process from them.

Since AI works on the basis of algorithms that learn from existing data, there is a risk that this data introduces systematic bias. If the algorithms learn from historical data that reflects gender-based or ethnic inequalities, for example, these distortions can be reproduced and reinforced in learning systems.

This can lead to unfair treatment and discrimination against learners. There is a need to recognize these distortions and to take effective measures to compensate for or eliminate such biases in the algorithms.

In addition, there is the so-called “cold start problem”: for new users, there is often no data available to identify similar users. Suitable recommendations (item-to-people and people-to-people) are, therefore, particularly difficult at the beginning and not yet individualized, which can have a fundamentally negative impact on the quality of the recommendations given. Meanwhile, however, there are some solutions for how the cold start problem can be mastered technologically (see further reading: Dacrema et al.).
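One pragmatic pattern for the cold start is a fallback chain: score items against the interests a new user declared during onboarding, and fall back to non-personalized popularity when nothing was declared. The catalogue fields and scoring below are illustrative assumptions, not a specific technique from the Dacrema et al. chapter:

```python
def cold_start_recommend(user, catalogue, top_n=2):
    """For users without interaction history, score items by overlap
    with declared onboarding interests; if none were declared,
    fall back to global popularity."""
    if user["interests"]:
        key = lambda item: len(set(item["tags"]) & set(user["interests"]))
    else:
        key = lambda item: item["popularity"]
    return [i["title"] for i in sorted(catalogue, key=key, reverse=True)[:top_n]]

catalogue = [
    {"title": "Python basics", "tags": ["python"], "popularity": 120},
    {"title": "Leadership 101", "tags": ["soft-skills"], "popularity": 300},
    {"title": "Statistics", "tags": ["statistics", "python"], "popularity": 80},
]
new_user = {"interests": ["python", "statistics"]}
print(cold_start_recommend(new_user, catalogue))  # → ['Statistics', 'Python basics']
```

As interaction events accumulate, recommendations can gradually shift from this fallback to collaborative filtering over similar users.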

And why are people-to-people (reciprocal) recommenders more worthwhile for corporate learning than for Tinder?

The technology-supported search for a partner for life and an ideal learning partner are not so dissimilar and are based on the same technological solutions. Dating platforms such as Tinder advertise themselves as a way to find the right partner.

Successful social learning requires the right partner (hard and soft skills). Both searches are based on (reciprocal) people-to-people recommender systems. Ideally, however, a good recommender system is only used once in dating. After all, the goal of most dating platforms is to find the right partner as quickly as possible. Once you’ve found your soul mate, you don’t really need the recommender system anymore.

When it comes to learning, however, we are all a bit more ambitious. In corporate learning, we proclaim the imperative of “lifelong learning,” which is why recommender solutions are much more durable here than in the case of Tinder and Co.

The effort to support one’s own educational and learning processes with a recommender solution is much more worthwhile here because learning together is twice as much fun. So, when it comes to our learning partners, we are all polygamous in the end.

Used and recommended literature

Da Silva, F.L., Slodkowski, B.K., da Silva, K.K.A. et al. (2023). A systematic literature review on educational recommender systems for teaching and learning: research trends, limitations and opportunities. Educ Inf Technol 28, 3289–3328. https://doi.org/10.1007/s10639-022-11341-9

Dacrema, M.F., Cantador, I., Fernández-Tobías, I., Berkovsky, S., Cremonesi, P. (2022). Design and Evaluation of Cross-Domain Recommender Systems. In: Ricci, F., Rokach, L., Shapira, B. (eds) Recommender Systems Handbook. Springer, New York, NY. https://doi.org/10.1007/978-1-0716-2197-4_13

Koprinska, I., Yacef, K. (2022). People-to-People Reciprocal Recommenders. In: Ricci, F., Rokach, L., Shapira, B. (eds) Recommender Systems Handbook. Springer, New York, NY. https://doi.org/10.1007/978-1-0716-2197-4_11

AI agents: The future of automation and intelligent assistance (2025 guide)

Imagine having a personal assistant who can not only schedule your appointments and send emails but also proactively anticipate your needs, learn your preferences, and complete complex tasks on your behalf.

That’s the promise of AI agents — intelligent software entities designed to operate autonomously and achieve specific goals.

What are AI agents?

In simple terms, an AI agent is a computer program that can perceive its environment, make decisions, and take actions to achieve a defined objective. They’re like digital employees, capable of handling tasks ranging from simple reminders to complex problem-solving.


Key characteristics of AI agents

  • Perception: Agents can sense their environment through sensors (like cameras, microphones, or data feeds). Think of it like our senses: sight, hearing, touch, etc., that give us information about the world around us.
  • Decision-making: Based on their perception, agents use AI algorithms to make informed decisions. This is like our brain processing information and deciding what to do next.
  • Action: Agents can perform actions in their environment, such as sending emails, making purchases, or controlling devices. This is like our bodies carrying out the actions our brain decides upon.
  • Autonomy: Agents can operate independently without constant human intervention. They can learn from their experiences and adapt to changing circumstances. This is similar to how we learn and become more independent over time.

Types of AI agents

  • Simple reflex agents: These agents react directly to their current perception. Like a thermostat, they turn on the heat when it’s cold and turn it off when it’s warm.
  • Model-based reflex agents: These agents maintain an internal model of the world, allowing them to make decisions based on past experiences. Imagine a self-driving car using a map to navigate.
  • Goal-based agents: These agents have specific goals they are trying to achieve. They make decisions based on how close they are to reaching their objective. Think of a robot trying to solve a maze.
  • Utility-based agents: These agents try to maximize their “utility” or happiness. They consider multiple factors and choose the action that will lead to the best overall outcome. Imagine an AI agent managing your finances, trying to maximize your returns while minimizing risk.
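The thermostat analogy above can be written out as a minimal simple reflex agent; the thresholds and action names are illustrative:

```python
class ThermostatAgent:
    """Simple reflex agent: acts only on the current percept,
    with no memory, model, or goal beyond its fixed rules."""

    def __init__(self, target=21.0, band=1.0):
        self.target = target  # desired temperature (°C)
        self.band = band      # dead zone to avoid rapid switching

    def act(self, temperature):
        # Perceive → decide → act, all from the current reading alone.
        if temperature < self.target - self.band:
            return "heat_on"
        if temperature > self.target + self.band:
            return "heat_off"
        return "idle"

agent = ThermostatAgent()
print([agent.act(t) for t in (18.0, 21.0, 23.5)])
# → ['heat_on', 'idle', 'heat_off']
```

A model-based or goal-based agent would extend this by keeping state between calls (e.g., a temperature history or a target schedule) instead of reacting to each reading in isolation.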

Analogies for understanding AI agents

  • A self-driving car: It perceives its surroundings (other cars, pedestrians, traffic lights), makes decisions (accelerate, brake, turn), and takes actions (controls the steering wheel, brakes, and accelerator).
  • A smart thermostat: It senses the temperature, makes decisions (turns on/off the heating/cooling), and takes action (controls the HVAC system).
  • A personal assistant: They perceive your schedule, make decisions (schedule meetings, send reminders), and take action (send emails, make phone calls).




Future uses of AI agents

The future of AI agents is brimming with possibilities:

  • Personalized education: AI tutors that adapt to each student’s learning style and pace.
  • Healthcare management: AI agents that monitor patients’ health, schedule appointments, and provide personalized health advice.
  • Smart homes and cities: AI agents that optimize energy consumption, manage traffic flow, and enhance public safety.
  • Complex problem solving: AI agents that can collaborate with humans to tackle complex scientific, economic, and social challenges.

Challenges and considerations

While the potential of AI agents is immense, there are challenges to address:

  • Ethical considerations: Ensuring agents make fair and unbiased decisions.
  • Safety and reliability: Making sure agents operate safely and reliably in complex environments.
  • Transparency and explainability: Understanding how agents make decisions.

Conclusion

AI agents represent a significant step towards a more automated and intelligent future. By understanding their capabilities and addressing the associated challenges, we can unlock their full potential and create a world where AI agents work alongside us to make our lives easier, more productive, and more fulfilling.


Check out the event calendar for 2025 and see where we’ll be throughout the year.

Get your ticket today and network with like-minded individuals.


Prompt engineering: How to talk to AIs like ChatGPT?

It’s challenging to meet someone who hasn’t heard about GPT and other similar models this year. These Large Language Models (LLMs) signify a groundbreaking shift in the domains of machine learning and artificial intelligence. A field that remained obscure for most of its history is now an integral part of daily life for a vast segment of the global population, with tools like ChatGPT.

As a researcher dedicated to this field for over four years, I have extensively used these tools, particularly this year. This journey has greatly deepened my understanding of LLMs and the art of prompt engineering. Consequently, this article serves as a primer on prompt engineering, delving into the array of techniques used to control LLMs.

What are prompts and prompt engineering?

Prompt engineering is the strategic creation of prompts for pre-trained models like GPT, BERT, and others; a prompt describes what we ask the model to do. The aim is to steer these models toward the specific behavior we seek.

Successful prompt engineering hinges on meticulously defining the prompt with appropriate examples, relevant context, and clear directives. It demands a profound understanding of the model’s underlying mechanisms and the nature of the problem at hand.

This knowledge is crucial to ensure that the examples incorporated in the prompt are as representative and varied as possible, closely mirroring the real-world distribution of input-output pairs that characterize the problem.

Consider the simple task of translating text from English to French. Achieving this through prompt engineering is remarkably straightforward. One needs a pre-trained model, such as GPT-4, and a well-crafted prompt.

This prompt should 1) outline the task, 2) provide a few example sentences with their translations, and 3) include the specific sentence requiring translation, as demonstrated in the figure below. That’s it! GPT-4, already trained on an enormous corpus, inherently grasps the concept of translation; it merely requires the correct prompt to apply its learned skills.
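Such a translation prompt can also be assembled programmatically. The helper below is an illustrative sketch; the `=>` separator and the example pairs mirror the common few-shot format rather than any required syntax:

```python
def few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, then
    example 'input => output' pairs, then the new input."""
    lines = [task, ""]
    for src, tgt in examples:
        lines.append(f"{src} => {tgt}")
    lines.append(f"{query} =>")  # the model is expected to complete this line
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Translate English to French.",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "peppermint",
)
print(prompt)
```

The resulting string is what gets sent to the model; the trailing `peppermint =>` signals that the completion should be the translation itself, with no conversational filler.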

Example of a prompt

Zero, one, and few-shot prompts


Prompts can be classified in various ways. Take, for instance, zero-shot, one-shot, and few-shot prompts, which correspond to the number of examples provided to the model for task execution.

In zero-shot settings, the Large Language Model (LLM) receives only the task description and input. For example, in the figure, we ask it to translate ‘cheese’. These zero-shot prompts already demonstrate impressive performance, as evidenced by this particular paper.

Despite their efficacy, I generally avoid zero-shot prompts for a couple of reasons. Firstly, adding just a few examples can significantly enhance performance, and you don’t need many, as highlighted in another paper.

More crucially, by incorporating a few examples, you not only clarify the task for the model but also illustrate the desired response format. In zero-shot translation, the model might respond with, “Sure, here is the translation of your text…”.

However, with a few-shot approach, it learns more effectively that a text followed by “=>” indicates that the subsequent content should directly be the translation. This nuance is useful, especially when seeking to precisely control the model’s output for commercial applications.

Dynamic prompts

Utilizing tools like langchain, we can also create dynamic prompts. In the example mentioned earlier, this means that ‘cheese’ becomes a variable, alterable to any word we wish to translate. This seemingly straightforward concept paves the way for complex systems, where parts of the prompt are either removed or added in response to user interaction.

For instance, a dynamic prompt for a chatbot might incorporate elements from the ongoing conversation with a user. This approach enhances the bot’s ability to understand and react more appropriately to the context of the discussion.

Similarly, a prompt initially designed for text generation can be dynamically adapted to revise a given text, ensuring it aligns with previously generated content. This flexibility allows for more nuanced and context-aware interactions, significantly enriching the user experience and simplifying developments.
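To keep the sketch self-contained, the example below uses Python’s built-in string templating rather than langchain’s API; the idea is the same: the word to translate and the optional conversation context are variables, and whole sections appear or disappear depending on the interaction:

```python
from string import Template

# A dynamic prompt: sections are added or dropped per interaction.
BASE = Template("Translate English to French.\n$history$word =>")

def build_prompt(word, conversation=None):
    """Include recent conversation context only when it exists."""
    history = ""
    if conversation:
        history = ("Context from this conversation:\n"
                   + "\n".join(conversation) + "\n")
    return BASE.substitute(history=history, word=word)

print(build_prompt("cheese"))
print(build_prompt("cheese", ["User prefers formal vocabulary."]))
```

The second call produces a longer prompt that carries the conversational context, while the first collapses to the plain translation request.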


Prompt chaining 

Prompts can be employed sequentially, a technique known as prompt chaining. In this method, a prompt used to respond to a user query might incorporate the summary of a previous query as a variable. This summary itself could be the output of a separate prompt. This layered approach allows for more complex and context-aware responses, as each prompt builds upon the output of the previous one.
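Prompt chaining can be sketched as ordinary function composition; `llm` below is a stand-in stub for a real model call (e.g. an API request), not an actual API:

```python
def llm(prompt):
    """Stand-in for a real model call; returns a canned
    reply so the chaining structure is visible."""
    return f"<reply to: {prompt[:30]}...>"

def summarize(conversation):
    # First prompt: condense the running conversation.
    return llm("Summarize this conversation:\n" + conversation)

def answer(query, conversation):
    # Second prompt: the first prompt's OUTPUT becomes a variable here.
    summary = summarize(conversation)
    return llm(f"Conversation summary: {summary}\nUser asks: {query}\nAnswer:")

print(answer("What did we decide?", "A: let's ship Friday.\nB: agreed."))
```

In production, each step would be a separate model invocation; chaining keeps every individual prompt short while the system as a whole stays context-aware.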


Continuous prompt

This is a more sophisticated approach that uses the fundamentals of LLMs. Prompts consist of words, and these words are processed by Large Language Models (LLMs) through word embeddings, which are essentially numerical representations of words. Consequently, rather than solely relying on textual prompts, we can employ an optimization algorithm like Stochastic Gradient Descent directly on the prompt embedding representation. This method essentially refines the input, as opposed to fine-tuning the model itself. For example, in this article, they enhance the model’s performance by concatenating a fine-tuned prompt with a standard prompt.
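The idea can be illustrated on a toy stand-in for an LLM: a frozen linear map whose input embedding we optimize by gradient descent while the “model” weights never change. Everything here (the linear model, the squared loss, the learning rate) is a deliberately simplified assumption, not the actual method from the cited article:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=4)
W /= np.linalg.norm(W)            # frozen "model" weights (stand-in for the LLM)
target = 1.0                      # the output we want the model to produce

def model(prompt_emb):
    return float(W @ prompt_emb)  # the model itself stays fixed

p = rng.normal(size=4)            # the continuous prompt embedding we optimize
lr = 0.05
for _ in range(200):
    err = model(p) - target
    p -= lr * 2 * err * W         # gradient of (W·p - target)^2 w.r.t. p, not W

print(round(model(p), 4))         # → 1.0
```

The key point survives the simplification: only the prompt embedding `p` receives gradient updates, which is why continuous prompting is far cheaper than fine-tuning the model’s own parameters.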


Only-shot prompt?

This method, while not officially named, stems from insights in a research paper that posits that task descriptions and directives in prompts are largely useless.

The paper illustrates that prompts can contain important, irrelevant, or even contradictory information without significantly impacting the outcome, provided there are sufficient high-quality examples. This was a lesson I learned through experience prior to discovering the paper.

I used to craft complex prompts laden more with directives and task descriptions than examples. However, at some point, I experimented with using only examples, omitting directives and task descriptions entirely, and observed no notable difference.

Essentially, my detailed instructions were superfluous; the model prioritized the examples. This can be explained by the fact that the examples are more isomorphic to the final output to generate. The model’s attention mechanism thus focuses more on the examples than on any other aspect of the prompt.


Chain of thought prompts

Chain of Thought (CoT) prompts involve structuring examples not as simple “X -> Y” transformations but as “X -> Deliberating on X -> Y”. This format guides the model to engage in a thought process about X before arriving at the final answer Y. If you’re curious about the nuances of this approach, there’s a detailed paper on the subject.

However, it’s crucial to remember that most modern Large Language Models (LLMs) are autoregressive. This means that while the “X -> Deliberating on X -> Y” structure is effective, a format like “X -> Y -> Explain why Y is the answer” is less so.

In the latter case, the model has already determined Y and will then concoct a rationale for its choice, which can lead to flawed or even comical reasoning. Recognizing the autoregressive nature of LLMs is essential for efficient prompt engineering.
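The contrast between the two orderings is easiest to see in how the few-shot examples themselves are written; the strings below are illustrative:

```python
# Effective: the reasoning precedes the answer, so the answer
# token(s) are generated conditioned on the reasoning.
cot_example = (
    "Q: A farm has 3 pens with 4 sheep each. How many sheep?\n"
    "Reasoning: 3 pens times 4 sheep is 12 sheep.\n"
    "A: 12"
)

# Less effective for an autoregressive model: the answer is already
# committed before any explanation is generated, so the explanation
# can only rationalize it after the fact.
post_hoc_example = (
    "Q: A farm has 3 pens with 4 sheep each. How many sheep?\n"
    "A: 12\n"
    "Explanation: 3 pens times 4 sheep is 12 sheep."
)
```

Left-to-right generation means only the first format lets the deliberation influence the answer; the second invites confident answers followed by invented justifications.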

Further research has expanded on the CoT concept. More sophisticated strategies include self-consistency, which generates multiple CoT responses and selects the best one (paper here), and the Tree of Thoughts approach, which accommodates non-linear thinking, as explored in several papers (see 1 & 2). These advancements underscore the evolving complexity and great potential of prompt engineering.


More & more

The world of prompting techniques is rapidly evolving, making it a challenge to stay current. While it’s impossible to cover every new development in this article, here’s a quick overview of other notable techniques:

  1. Self-ask: This method trains the model to ask itself follow-up questions about specific details of a problem, enhancing its ability to answer the original question more precisely.
  2. Meta-prompting: Here, the model engages in a dialogue with itself, critiquing its own thought process, aiming to produce a more coherent outcome.
  3. Least-to-most: This approach teaches the model to deconstruct a complex problem into smaller sub-problems, facilitating a more effective solution-finding process.
  4. Persona/role prompting: In this technique, the model is instructed to assume a specific role or personality, altering its responses accordingly.

Through this article, I hope to have introduced you to some of the more innovative and lesser-known prompt engineering techniques. The creativity and ingenuity in current research indicate that we are just beginning to uncover the full potential of these models and the prompts we use to control them.

IBM’s Leadership in Generative AI: Insights from Manav, CTO of IBM Canada

At the recent Generative AI Summit in Toronto, I had the opportunity to sit down with Manav Gupta, CTO of IBM Canada, to discuss the company’s current work in generative AI and its vision for the future. Here are the key insights from our conversation, highlighting IBM’s ecosystem leadership, industry impact, and strategies for navigating challenges in the generative AI landscape.


IBM’s position in the generative AI landscape

Manav began by emphasizing IBM’s commitment to ensuring that enterprises own their AI agenda. He stressed the importance of AI being open and accessible to organizations, individuals, and societies to foster growth. To this end, IBM leads with Watson X, a comprehensive platform that serves as both a model garden and a prompt lab. Watson X allows users to leverage IBM-supplied models, third-party models, or even fine-tune their own models for deployment on their preferred cloud or on-premises infrastructure.

One of the standout features of IBM’s approach is its focus on AI governance. Manav highlighted the critical need for enterprises to ensure that the AI they deploy is free from biases, hate speech, and other ethical concerns. IBM’s governance platform is designed to address these issues, ensuring that generative AI outputs are safe and unbiased.

The transformative impact of generative AI

When asked about the impact of generative AI across industries, Manav was unequivocal in his belief that this technology will touch every sector. He cited estimates that generative AI could add up to 3.5 basis points to global GDP, a staggering figure that underscores its potential. Industries such as banking, healthcare, telecommunications, and the public sector are poised to benefit significantly. 

  • Banking and Financial Services: Streamlining workflows and enhancing decision-making.
  • Public Sector and Healthcare: Unlocking data-driven efficiencies and improving service delivery.
  • Telecommunications: Transforming customer interactions and operational processes.

Manav explained that wherever there is a large corpus of data and existing workflows, generative AI can unlock human potential by automating mundane tasks and allowing employees to focus on higher-value activities.

Challenges in deploying generative AI

Despite the immense potential, Manav acknowledged that deploying generative AI solutions is not without its challenges. One of the primary hurdles is client maturity. Many organizations are still in the experimental phase, trying to understand both the opportunities and the risks associated with this technology. Additionally, integrating generative AI with existing data systems is a significant challenge. Enterprises often have high-quality data, but it is locked in silos across departments such as finance, HR, and procurement. Accessing and unifying this data in a timely manner is a complex task.

Another major challenge is the resource intensity of generative AI. The specialized hardware required to run these models is expensive and often in short supply, leading to long lead times for deployment.

Future trends in generative AI

Looking ahead, Manav foresees several key trends in the generative AI market. He predicts that models will continue to improve, with a shift from large language models (LLMs) to more fit-for-purpose smaller models. These smaller models, often referred to as small language models (SLMs), are more efficient and tailored to specific use cases. Manav also highlighted the rise of agentic AI, where AI systems will have greater autonomy to execute tasks on behalf of humans, particularly in high-value areas like software engineering and testing.

Another trend is the increasing importance of multi-modal models, which can process and generate different types of data, such as images and text. Manav gave an example of how enterprises could use multi-modal models to analyze images and make decisions based on that analysis, opening up new possibilities for automation and efficiency.


Key takeaways from Manav’s presentation

Manav concluded our interview by summarizing the key takeaways from his summit presentation.

  1. Be an AI value creator, not just a consumer. Don’t just use AI—figure out how to make it work for you.
  2. Start with models you can trust. Whether it’s IBM’s Granite models or open-source alternatives, experiment with reliable AI solutions.
  3. Don’t treat AI governance as an afterthought. Privacy, security, and responsible AI should be built into the foundation of your AI strategy.

Watch Manav’s presentation at the Generative AI Summit in Toronto.


IBM’s Granite models and InstructLab

During his presentation, Manav also delved into IBM’s Granite models, a series of open-source foundation models designed for enterprise use. These models, which include specialized versions for time series and geospatial data, are trained on vast amounts of data and are optimized for performance and cost-efficiency.

IBM has also developed InstructLab, a novel methodology for adding enterprise data to LLMs without the need for extensive fine-tuning. This approach allows organizations to iteratively train models on their specific data, ensuring that the AI remains relevant and accurate for their unique use cases.


Conclusion

Manav’s insights underscore IBM’s leadership in the generative AI space, particularly in addressing the challenges of scalability, integration, and governance. As enterprises continue to explore the potential of generative AI, IBM’s Watson X platform and Granite models offer a robust foundation for innovation. With a focus on trust, transparency, and ethical AI, IBM is well-positioned to help organizations navigate the complexities of this transformative technology.

The Generative AI Summit series from the AI Accelerator Institute provides a platform for thought leaders like Manav to share their vision for the future of AI.

InvestAI: Europe’s €200 billion move to lead in AI innovation

At the artificial intelligence (AI) Action Summit in Paris on February 11, President Ursula von der Leyen introduced InvestAI, a groundbreaking initiative to mobilize €200 billion for AI investment.

Central to this effort is a €20 billion European fund dedicated to AI gigafactories—large-scale infrastructure designed to foster open, collaborative development of the most advanced AI models and position Europe as a global AI leader.

President Ursula von der Leyen stated:

“AI has the potential to revolutionize healthcare, accelerate research, and enhance Europe’s competitiveness. We want AI to be a force for both good and growth. Our European approach—rooted in openness, collaboration, and top-tier talent—lays the foundation, but we need to go further.

“That’s why, in partnership with Member States and industry, we are mobilizing unprecedented capital through InvestAI for European AI gigafactories.

This public-private initiative, akin to a ‘CERN for AI,’ will empower scientists and businesses of all sizes—not just the largest—to develop cutting-edge AI models and solidify Europe’s position as an AI powerhouse.”

European Investment Bank President Nadia Calviño added:

“The EIB Group, in collaboration with the European Commission, is reinforcing its support for AI—a key driver of European innovation and productivity.”

AI gigafactories: Scaling Europe’s AI capabilities

InvestAI will fund four AI gigafactories across the EU to train the next generation of complex, large-scale AI models. These facilities will provide the computing power needed to drive breakthroughs in medicine and scientific research. Each gigafactory will house approximately 100,000 next-generation AI chips—four times more than today’s AI hubs.

As the world’s largest public-private initiative for trustworthy AI, these gigafactories will follow Europe’s cooperative, open innovation model, focusing on industrial and mission-critical AI applications.

The goal is to ensure that companies of all sizes—not just industry giants—have access to high-performance computing to develop the AI technologies of the future.


A strategic investment model

InvestAI will operate through a layered fund structure, offering varying risk and return profiles. The EU budget will help derisk private investments, while initial funding will come from existing EU digital programs like Digital Europe, Horizon Europe, and InvestEU.

Member States can also contribute by allocating Cohesion funds. AI gigafactory financing will blend grants and equity, serving as a key pilot under the Competitiveness Compass strategy for high-priority technologies.

This initiative builds on the Commission’s €10 billion AI Factories program, launched in December, which has already unlocked more than ten times that amount in private investment. The upcoming announcement of five additional AI Factories will expand Europe’s AI capabilities further, offering start-ups and industries broad access to supercomputing resources.

Next steps

Alongside InvestAI, the European Commission is rolling out multiple initiatives to accelerate AI innovation across the continent:

  • Funding for generative AI through Horizon Europe and the Digital Europe program.
  • Expanding Europe’s AI talent pool via education, training, and workforce upskilling.
  • Boosting AI start-ups and scale-ups through venture capital and equity support.
  • Enhancing Common European Data Spaces, providing critical datasets to train and refine AI models.
  • Launching ‘GenAI4EU’, fostering AI-driven solutions across 14 industrial sectors, including health, biotech, manufacturing, mobility, and climate.

Additionally, the Commission will establish a European AI Research Council to pool resources and maximize Europe’s AI potential. Later this year, the ‘Apply AI’ initiative will further drive AI adoption in key industries.

With InvestAI, Europe is pushing to lead in AI innovation, ensuring that all companies—from start-ups to industry leaders—can build an AI-powered future.


Have a look at our events in the calendar below and join us in expanding Europe’s AI conversation:
