AI in the Open
- Jacek Olender
- Mar 15
- 4 min read
Updated: 3 days ago
In its latest issue, The Economist (Mar 13th, 2025) writes about Manus AI, a system developed in China that performs online tasks on its own. Read the simplified summary of the article and answer the questions. Compare your answers with the sample answers provided below the summary.
What are the main differences between the way Manus AI was released and how Western AI companies introduce their AI tools? Why do these differences matter?
Why might some companies feel pressured to release AI tools faster, even if they are not fully tested? What risks could this create?
What solution does the article suggest?
Simplified Summary of "Get Used to It" (The Economist, Mar 13th, 2025)

Manus AI: A New Kind of Artificial Intelligence
A new AI tool called Manus AI has been released, and it is different from most other AI systems. It can complete online tasks by itself, without asking a human for permission. For example, it can create social media accounts, write reports, and book tickets for a trip.
Manus AI was created by a company in China. The company says it is the world’s first AI that can "turn your thoughts into actions." However, many AI researchers around the world have been working on similar technology in private. The difference is that Manus AI was released for public use, while other companies are still testing their versions.
At first, Manus AI sounds impressive. But when people use it, they find many problems. It gives confusing answers, takes too long to complete tasks, and sometimes gets stuck in an endless loop. This suggests that the creators focused on being first rather than making sure the AI works perfectly.
Big AI companies in the United States and Europe take a different approach. They test their AI systems carefully before releasing them. For example, OpenAI waited nine months before fully releasing GPT-2, and Google delayed its chatbot Bard for more than two years to make sure it was safe.
Many AI companies are especially careful with "agentic" AI, like Manus. This type of AI can solve problems on its own without a human guiding every step. While this is exciting, it could also be dangerous if the AI makes mistakes or is used for the wrong reasons. That’s why companies like Google and Anthropic are still testing their AI assistants before releasing them.
The release of Manus AI puts pressure on Western AI companies. If they take too long to test their AI, they might fall behind. But if they release AI too quickly, they risk safety problems.
Some people in the United States worry that China is moving ahead in AI technology. However, Manus AI is not as impressive as DeepSeek, another Chinese AI company that created a very powerful and affordable AI model. In reality, any company—American, Chinese, or from another country—could create a tool like Manus, as long as they are willing to take risks.
Right now, there is no proof that Manus AI is dangerous. However, the way AI safety works is changing. In the past, big AI companies tested their models for a long time before releasing them. Now, AI is being tested in public. Instead of waiting to make AI safe before releasing it, companies and governments will have to watch AI systems in real time and remove them if they cause problems.
Whether people like it or not, Manus AI shows that AI development is now happening in the open.
Sample Answers:
What are the main differences between the way Manus AI was released and how Western AI companies introduce their AI tools? Why do these differences matter?
Manus AI was released quickly without full testing. Its creators wanted to be first instead of making sure everything worked well. Western AI companies, like OpenAI and Google, usually test their AI for a long time before releasing it. They want to make sure it is safe and reliable. These differences matter because releasing AI too fast can cause mistakes, security problems, or harm to users. But waiting too long means a company might fall behind competitors.
Why might some companies feel pressured to release AI tools faster, even if they are not fully tested? What risks could this create?
Companies feel pressured because they don't want to lose the race to their competitors. If one company releases AI quickly, others feel they must do the same to stay in the market. Also, some people in the United States worry about China moving ahead in AI technology.
Releasing AI too fast can create many risks. The AI might not work well, give wrong information, or be used in the wrong way. If the AI makes bad decisions, it could cause harm.
What solution does the article suggest?
The article says that traditional safety testing is no longer enough. Instead of testing AI for a long time before releasing it, companies and governments should watch how AI works in real life. If an AI system creates problems, they should intervene quickly and remove it if necessary. This means AI development will now happen in public, not just in labs.