Cursor's New Bugbot Is Designed to Save Vibe Coders From Themselves

But the competitive landscape of AI-assisted coding platforms is crowded. Startups Windsurf, Replit, and Poolside also sell AI code generation tools to developers. Cline is a popular open source alternative. GitHub's Copilot, developed in collaboration with OpenAI, is described as a "pair programmer" that autocompletes code and offers debugging assistance.

Most of these code editors rely on a combination of AI models built by large tech companies, including OpenAI, Google, and Anthropic. Cursor, for example, is built on Visual Studio Code, Microsoft's open source editor, and Cursor users generate code by tapping AI models like Google's Gemini, DeepSeek, and Anthropic's Claude Sonnet.

Several developers tell WIRED that they now run Anthropic's coding assistant, Claude Code, alongside Cursor (or instead of it). Since May, Claude Code has offered various debugging options. It can analyze error messages, work through problems step by step, suggest specific changes, and run unit tests on code.
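As a hypothetical illustration of that step-by-step debugging workflow: an assistant might read a `ZeroDivisionError` traceback, pinpoint the missing edge case, propose a fix, and run a unit test to confirm it. The function, bug, and tests below are invented for this sketch; they are not drawn from Claude Code or any real codebase.

```python
# Invented example of the debug loop described above.

def average(values):
    """Buggy version: raises ZeroDivisionError on an empty list."""
    return sum(values) / len(values)

def average_fixed(values):
    """Suggested fix: handle the empty-list case explicitly."""
    if not values:
        return 0.0
    return sum(values) / len(values)

# Unit tests an assistant could run to confirm the fix:
assert average_fixed([]) == 0.0
assert average_fixed([2, 4, 6]) == 4.0
```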

All of this raises the question: How buggy is code written by AI compared to code written by fallible humans? Earlier this week, Replit's AI code generation tool reportedly went rogue and made changes to a user's code even though the project was in a "code freeze," or pause. It ended up deleting the user's entire database. Replit's founder and CEO said on X that the incident was "unacceptable and should never be possible." And yet it was. That's an extreme case, but even small bugs can wreak havoc for coders.

Anysphere didn't have a clear answer to the question of whether AI-generated code requires more AI-powered debugging. Kaplan maintains that the question is "orthogonal to whether people are vibe coding." Even if all code were written by humans, it's still very likely there would be bugs, he says.

Anysphere product engineer Rohan Varma estimates that on professional software teams, 30 to 40 percent of code is now generated by AI. This aligns with estimates shared by other companies; Google, for example, has said that around 30 percent of the company's code is now suggested by AI and reviewed by human developers. Most organizations still make human engineers responsible for checking code before it's deployed. Notably, a recent randomized controlled trial with 16 experienced coders suggested that they took 19 percent longer to complete tasks when they were allowed to use AI tools.

Bugbot is meant to address this. "The AI leads at our biggest customers are looking for the next step with Cursor," Varma says. "Step one was: Increase the velocity of our teams, get everyone moving faster. Now that they're moving faster, it's: How do we make sure we're not introducing new problems, that we're not breaking things?" He also noted that Bugbot is designed to catch specific types of bugs: logic bugs, security issues, and other edge cases.
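To make those bug classes concrete, here is a hypothetical sketch of a security issue an AI code reviewer might flag in a pull request: SQL built by string formatting, which allows injection. Bugbot's actual checks are not public; the functions, table, and payload below are invented for illustration.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # The kind of line a reviewer bot would flag: string-formatted
    # SQL lets `name` inject arbitrary SQL into the query.
    query = f"SELECT id FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, name):
    # Suggested fix: a parameterized query treats `name` as data only.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

# Demo: the unsafe version returns every row for a crafted input.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])
payload = "x' OR '1'='1"
assert len(find_user_unsafe(conn, payload)) == 2  # injection succeeds
assert find_user_safe(conn, payload) == []        # safe version: no match
```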

One incident validated Bugbot for the Anysphere team: A few months ago, Anysphere's (human) coders realized they hadn't received any Bugbot comments on their code for a few hours. Bugbot had gone down. Anysphere's engineers began investigating the problem and found the pull request responsible for the outage.

There, in the logs, they saw that Bugbot had commented on that very pull request, warning a human engineer that if they made the change, it would break the Bugbot service. The tool had correctly predicted its own demise. In the end, it was a human who broke it.
