Developers lose focus 1,200 times a day – how MCP could change that

Software developers spend most of their time not writing code; recent industry research has found that actual coding accounts for as little as 16% of developers’ work hours, with the rest consumed by operational and support tasks. While engineering teams are pushed to “do more with less” and CEOs boast about how much of their code base is written by AI, a question remains: what about optimizing the remaining 84% of the tasks engineers work on?
Keep developers where they are most productive
A major drag on developer productivity is context switching: the constant jumping between the ever-growing range of tools and platforms needed to build and ship software. A Harvard Business Review study found that the average digital worker toggles between applications and websites nearly 1,200 times a day. And each interruption matters. University of California researchers found that it takes about 23 minutes to fully regain focus after a single interruption, and sometimes worse: almost 30% of interrupted tasks are never resumed. Context switching is, in fact, central to DORA, one of the most popular software development performance frameworks.
At a time when AI-focused companies are trying to enable their employees to do more with less, beyond “simply” giving them access to large language models (LLMs), certain trends are emerging. For example, Jarrod Ruhland, principal engineer at Brex, hypothesizes that “developers deliver their highest value when focused inside their integrated development environment (IDE).” In that spirit, he set out to find new ways to get there, and Anthropic’s new protocol could be one of the keys.
MCP: a protocol that brings context into the IDE
Coding assistants, such as LLM-powered IDEs like Cursor, Copilot and Windsurf, are at the center of a developer renaissance. Their adoption speed is unprecedented. Cursor became the fastest-growing SaaS in history, reaching $100 million in ARR within 12 months of launch, and 70% of Fortune 500 companies use Microsoft Copilot.
But these coding assistants were limited to the context of the code base: they could help developers write code faster, but could not help with context switching. A new protocol addresses this problem: the Model Context Protocol (MCP). Released in November 2024 by Anthropic, it is an open standard designed to ease integration between AI systems, in particular LLM-based tools, and external tools and data sources. The protocol is so popular that new MCP servers have grown 500% over the last six months, with around 7 million downloads in June.
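Under the hood, MCP is built on JSON-RPC 2.0: a client asks a server which tools it exposes, then invokes them on the model's behalf. The following is a minimal sketch of that wire format using plain Python dictionaries; the `get_ticket` tool name and its schema are hypothetical examples invented for illustration, not part of the protocol.

```python
import json

# A client discovers a server's tools with a "tools/list" request:
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server replies with tool descriptors (name, description, input
# schema) that the LLM can reason about. "get_ticket" is hypothetical.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_ticket",
                "description": "Fetch a ticket by ID from the project tracker.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"ticket_id": {"type": "string"}},
                    "required": ["ticket_id"],
                },
            }
        ]
    },
}

# When the model decides to use a tool, the client sends "tools/call":
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_ticket", "arguments": {"ticket_id": "ENG-1234"}},
}

print(json.dumps(call_request, indent=2))
```

Because every integration speaks this one format, an IDE assistant can talk to a project tracker, a docs server and a chat tool through the same plumbing.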
One of MCP's most impactful applications is its ability to connect coding assistants directly to the tools developers rely on every day, streamlining workflows and considerably reducing context switching.
Take feature development as an example. Traditionally, it involves bouncing between several systems: reading the ticket in a project tracker, looking up a conversation with a teammate for clarification, searching the documentation for API details and, finally, opening the IDE to start coding. Each step lives in a different tab, demanding mental shifts that slow developers down.
With MCP and modern AI assistants like Anthropic's Claude, this whole process can happen inside the editor.
For example, implementing a feature in a coding assistant becomes a single conversation: the assistant pulls the ticket, surfaces the relevant discussion, fetches the API documentation and drafts the code, all without leaving the IDE. The same principle can apply to many other engineering workflows; an incident response for SREs, for instance, could follow the same pattern, with the assistant gathering alerts, logs and runbooks on demand.
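The tool-call sequence behind such a workflow can be sketched as follows. Everything here is illustrative: the tool names (`tracker.get_ticket`, `chat.search_thread`, `docs.lookup_api`) and the canned responses are made up for the example; a real assistant would issue these as MCP `tools/call` requests to separate servers.

```python
def fake_mcp_call(tool: str, arguments: dict) -> str:
    """Stand-in for a real MCP tools/call round-trip to a server."""
    canned = {
        "tracker.get_ticket": "ENG-1234: add rate limiting to /login",
        "chat.search_thread": "Teammate: let's use a sliding-window limiter",
        "docs.lookup_api": "RateLimiter(window_seconds, max_requests)",
    }
    return canned[tool]

# The assistant assembles the full context without any tab switching:
ticket = fake_mcp_call("tracker.get_ticket", {"ticket_id": "ENG-1234"})
thread = fake_mcp_call("chat.search_thread", {"query": "rate limiting"})
api_doc = fake_mcp_call("docs.lookup_api", {"symbol": "RateLimiter"})

context = "\n".join([ticket, thread, api_doc])
print(context)  # ticket, discussion and API reference, ready for codegen
```

The point is not the plumbing itself, but that each step that used to be a browser tab becomes a tool call resolved inside the editor.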
Nothing new under the sun
We have seen this model before. Over the past decade, Slack transformed workplace productivity by becoming a hub for hundreds of applications, letting employees handle a wide range of tasks without leaving the chat window. The Slack platform reduced context switching in daily workflows.
Riot Games, for example, connected around 1,000 Slack applications, and its engineers saw a 27% reduction in the time required to test and iterate on code, a 22% faster time to identify new bugs and a 24% increase in the feature launch rate; all were attributed to streamlined workflows and reduced tool-switching friction.
Now a similar transformation is happening in software development, with AI assistants and their MCP integrations serving as the bridge to all these external tools. Indeed, the IDE could become the new all-in-one command center for engineers, just as Slack was for general knowledge workers.
MCP may not be ready for the enterprise
MCP is a relatively nascent standard. On the security front, for example, MCP has no built-in authentication or authorization model, relying on external implementations that are still evolving. There is also ambiguity around identity and auditing: the protocol does not clearly distinguish whether an action was triggered by a user or by the AI itself, which pushes access accounting onto additional custom solutions. Lori MacVittie, a distinguished engineer and chief evangelist in F5 Networks' office of the CTO, says that MCP "breaks the basic security assumptions that we have long held."
Another practical limitation arises when too many MCP tools or servers are used simultaneously, for example inside a coding assistant. Each MCP server advertises a list of tools, with descriptions and parameters, which the AI model must consider. Flooding the model with dozens of available tools can overwhelm its context window. Performance degrades significantly as the number of tools increases, and some IDE integrations have imposed hard limits (around 40 tools in the Cursor IDE, or ~20 tools for OpenAI agents) to keep the prompt from inflating beyond what the model can handle.
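A back-of-the-envelope calculation shows why tool count matters. The numbers below are assumptions chosen for the sketch, not measurements: roughly 600 characters per tool descriptor (name, description and JSON schema), the common ~4-characters-per-token approximation, and a 128K-token context window.

```python
AVG_DESCRIPTOR_CHARS = 600    # assumed size of one tool descriptor
CHARS_PER_TOKEN = 4           # rough rule-of-thumb approximation
CONTEXT_WINDOW_TOKENS = 128_000

def prompt_overhead(num_tools: int) -> float:
    """Fraction of the context window consumed by tool descriptors alone."""
    tokens = num_tools * AVG_DESCRIPTOR_CHARS / CHARS_PER_TOKEN
    return tokens / CONTEXT_WINDOW_TOKENS

for n in (10, 40, 200):
    print(f"{n:>3} tools -> {prompt_overhead(n):.1%} of the window")
```

Even under these generous assumptions, hundreds of tools start crowding out the space available for the code and conversation the model actually needs, before accounting for the model's reasoning degrading as its choices multiply.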
Finally, there is no sophisticated way for tools to be discovered automatically or suggested contextually beyond listing them all, so developers often have to toggle tools manually or manage which ones are active to keep things working smoothly. Recalling the Riot Games example of 1,000 installed Slack applications, it is easy to see how this could be unfit for enterprise use.
Less chair swiveling, more software
The last decade has taught us the value of bringing the work to the worker, from Slack channels that pipe in updates to "inbox zero" email methodologies and unified platform-engineering dashboards. Now, with AI in our toolbox, we have the opportunity to make developers more productive. If Slack became the center of business communication, coding assistants are well placed to become the center of software creation: not only where code is written, but where all the context and collaborators converge. By keeping developers in their flow, we remove the constant mental lag that has plagued engineering productivity.
For any organization that depends on delivering software, look carefully at how your developers spend their day; you might be surprised by what you find.
Sylvain Kalache leads AI Labs at Rootly.




