What is AI-enshittification?
AI-enshittification refers to the practice by some AI companies of deliberately reducing the quality and capabilities of their AI services over time, often after an initial period of hype and growth. The term plays on the concept of “enshittification” coined by Cory Doctorow to describe the general trend of technology products becoming worse for users as companies prioritize growth and monetization.
In the context of AI, enshittification typically follows this pattern:
- An AI company launches a powerful, useful AI model and generates excitement
- The service attracts many users with an affordable subscription model
- The company collects data on user interactions and use cases
- Service quality is reduced through tactics like altering system prompts, limiting compute resources per query, and shrinking the context window
- The degraded service is kept running while users are expected to keep paying their subscriptions
- The company shifts focus to selling high-priced enterprise offerings based on the initial model
Through this process, everyday users drawn in by the initial capabilities and affordable pricing end up with an AI assistant that becomes progressively less helpful and reliable over time. Meanwhile, the AI company has achieved its goals of gathering usage data, developing its underlying model, and shifting to a more profitable business model.
How does the AI-enshittification process work?
The AI-enshittification process relies on a “bait-and-switch” approach – luring in users with powerful AI capabilities at first, then gradually degrading the product once a user base is established.
Here are the key stages:
1. Launch an impressive AI model
- Develop an AI model with strong performance on key tasks like question-answering, analysis, and content generation
- Invest heavily in model training and engineering to create best-in-class capabilities
- Generate buzz and excitement with public demos, beta access, and media coverage
2. Grow user base with accessible pricing
- Release the AI service to the general public with an affordable subscription model
- Price the service low enough to attract mass adoption from students, knowledge workers, and small businesses
- Emphasize ease-of-use and openness to a wide variety of tasks and prompts
3. Gather data on user behavior
- Log user interactions including prompts, settings, and output evaluations
- Analyze usage patterns to determine the most common use cases and workflows
- Identify what types of queries and tasks push the boundaries of the model’s knowledge and capabilities
4. Reduce service quality
- Alter the base prompts and instructions to make the model’s outputs blander and more restricted
- Add filters and blocks for controversial topics and limit the model’s ability to engage with certain tasks
- Reduce the amount of compute resources powering each user query, leading to shallower, less insightful responses
- Decrease the context window so the model has less awareness of prior conversation (a sketch at the end of this section illustrates the effect)
5. Maintain degraded service for remaining users
- Continue collecting subscription revenue from users who remain active despite the quality decline
- Avoid making major improvements or restoring capabilities to keep costs down
- Deprioritize the consumer service in favor of shifting resources to enterprise offerings
6. Shift to high-priced business offerings
- Develop specialized versions of the initial model fine-tuned for specific business use cases like sales, HR, and financial analysis
- Repackage the degraded consumer product as a “free tier” or “community edition” to maintain some public visibility
- Focus sales, marketing, and development effort on lucrative enterprise deals based on large-scale deployments and add-on services
- Leverage data and insights from the consumer offering to improve enterprise products
Through this multi-stage process, AI companies can exploit consumer excitement around powerful new technologies to quickly build a user base and gather valuable data. However, once the initial goals are met, incentives shift away from maintaining a high-quality, low-cost mass market product. The result for consumers is enshittification – a once-magical AI assistant slowly becomes unreliable, limited, and frustrating to use.
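To make stage 4 concrete, consider the context-window tactic. The sketch below is purely illustrative – the message format and the word-count stand-in for token counting are made up, not any real provider’s API – but it shows how quietly lowering a truncation limit makes an assistant “forget” earlier turns:

```python
# Hypothetical illustration of context-window truncation. The message
# format and word-count "tokens" are simplified stand-ins, not any
# real provider's API.

def truncate_history(messages: list[dict], max_tokens: int) -> list[dict]:
    """Keep only the most recent messages that fit within max_tokens."""
    kept, used = [], 0
    for message in reversed(messages):  # walk from newest to oldest
        cost = len(message["content"].split())  # crude token estimate
        if used + cost > max_tokens:
            break  # everything older than this point is silently dropped
        kept.append(message)
        used += cost
    return list(reversed(kept))

conversation = [
    {"role": "user", "content": "My name is Dana and I am planning a trip to Kyoto."},
    {"role": "assistant", "content": "Happy to help plan your Kyoto trip, Dana."},
    {"role": "user", "content": "What should I pack?"},
]

# Generous limit: the model still sees who is asking and why.
print(truncate_history(conversation, max_tokens=50))

# Quietly lowered limit: only "What should I pack?" survives, so the
# response arrives with no awareness of the prior conversation.
print(truncate_history(conversation, max_tokens=10))
```

A provider can make exactly this kind of change server-side with no visible UI difference, which is part of why the degradation is so hard for users to pin down.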
Why do AI companies engage in enshittification?
There are several reasons why AI companies may choose to degrade their consumer products over time:
Unsustainable pricing and costs
- Offering highly capable AI models to the public at low prices is expensive due to training costs, compute requirements, and ongoing development
- Pricing pressure from competing services creates a “race to the bottom” that rewards sacrificing quality for affordability and growth
- Venture capital backing incentivizes rapid user acquisition over long-term product sustainability
Valuable data collection
- User interactions with the AI models provide highly valuable data for improving model performance, identifying key use cases, and training specialized versions
- There is little incentive to maintain quality once enough data is collected, as the value has already been extracted
- User data helps companies transition to enterprise offerings by understanding the most commercially relevant applications
Reputational risks from open-ended models
- Highly capable language models run the risk of generating offensive, incorrect, or problematic content in response to adversarial prompts
- Companies may choose to limit their models’ abilities in order to avoid PR incidents and backlash
- Restricting the models also makes them appear more controlled and suitable for business use cases with less unpredictability
Transition to enterprise focus
- The most lucrative commercial opportunity for AI is in selling specialized enterprise solutions to large businesses
- Maintaining a high-quality, general-purpose consumer AI product takes focus and resources away from the higher-margin enterprise business
- AI companies have an incentive to push consumers towards pricier business offerings or accept a degraded “free tier” experience
Lack of user power
- Most users of AI writing assistants and chatbots are not paying customers and have little direct leverage over the product
- Even for paying subscribers, the cost and difficulty of switching services creates lock-in that enables quality decline
- The complexity of AI systems makes it hard for users to directly observe or measure capability reductions
The AI-enshittification process reflects misaligned incentives between AI companies and their users. While users want a consistently high-quality and affordable product, companies are motivated to pursue the highest-margin opportunities while neglecting mass-market offerings.
Without stronger feedback loops or user ownership models, product degradation is likely to continue.
What are the signs of AI-enshittification?
For users of AI writing assistants and chatbots, there are several warning signs that may indicate a product is undergoing enshittification:

| Sign of AI-enshittification | Description | Impact on users |
| --- | --- | --- |
| Increasing errors and inconsistencies | Responses contain more factual inaccuracies and contradictions compared to past performance | Users must spend more time fact-checking and verifying outputs |
| Reduced output quality and complexity | Responses become shorter, simpler, and less insightful over time | Users receive less value and insight from the AI assistant |
| Stricter content filters | The model refuses to engage with an increasing range of topics deemed controversial or sensitive | Users are limited in the types of queries and tasks they can perform |
| Slower response times | Queries take longer to process and generate outputs compared to past performance | Users experience reduced productivity and increased frustration |
| Reduced customization options | User settings for controlling model verbosity, creativity, and other parameters are removed or have less impact | Users have less control over the AI’s behavior and output style |
| Paywalled features and upselling | Advanced features and capabilities that were previously included are moved behind a paywall | Users must pay more to access the same functionality or switch to a different provider |
| Lack of transparency and communication | Release notes and documentation provide less detail on model changes and capability adjustments | Users are left in the dark about the reasons behind performance changes and have less trust in the provider |
In practice, each of these warning signs tends to show up in concrete ways:
Increasing errors and inconsistencies
- The model seems to “forget” information from earlier in a conversation or fails to track context
- Outputs are more likely to be off-topic or fail to fully address the query
Reduced output quality and complexity
- The model seems less able to engage in multi-step reasoning or tackle complex topics
- Writing style becomes blander and more generic, with less flair and personality
Stricter content filters
- Attempts to bypass filters with reworded prompts are more frequently rejected
- Outputs are more likely to include disclaimers and “safe mode” warnings
Slower response times
- The model becomes unresponsive or produces timeout errors more frequently
- Throughput drops, with fewer queries processed per minute
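Response speed is one of the few signs a user can measure directly. A minimal sketch, where `ask_model` is a placeholder for whichever client call or wrapper you actually use:

```python
# Time the same prompt repeatedly and keep the median, so one slow
# network hop doesn't skew the reading. `ask_model` is a placeholder
# for your actual client call.
import statistics
import time

def measure_latency(ask_model, prompt: str, runs: int = 5) -> float:
    """Return the median seconds-per-response over several runs."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        ask_model(prompt)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)
```

Logged weekly against a fixed prompt, this one number turns a slowdown into a measurable trend rather than a vague impression.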
Reduced customization options
- Prompt templates and options for different writing styles are eliminated
- The model becomes less flexible and harder to adapt to specific use cases and workflows
Paywalled features and upselling
- The free or low-cost tier becomes more limited and pushes users to upgrade to pricier plans
- Prompts and UI elements increasingly reference enterprise offerings and business use cases
Lack of transparency and communication
- User questions and support requests about reduced performance go unanswered
- Official communication emphasizes new features and enterprise offerings over addressing consumer product issues
While not all of these signs necessarily indicate intentional enshittification, a pattern of declining quality and increasing restrictions should raise concerns for users.
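Because most of these signs are subjective in isolation, one practical habit is to keep a small personal benchmark: a fixed set of prompts rerun on a schedule, with simple signals logged so drift shows up as a trend. A minimal sketch, again assuming `ask_model` stands in for your actual client call, with made-up example prompts:

```python
# Personal regression benchmark: rerun fixed prompts and append simple
# signals (output length, refusal markers) to a CSV over time.
# `ask_model` is a placeholder for your actual client call.
import csv
import datetime

BENCHMARK_PROMPTS = [
    "Summarize the causes of the 2008 financial crisis in five bullet points.",
    "Write a Python function that merges two sorted lists.",
    "Explain the difference between TCP and UDP to a beginner.",
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable to")

def run_benchmark(ask_model, log_path: str = "model_drift_log.csv") -> None:
    today = datetime.date.today().isoformat()
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for prompt in BENCHMARK_PROMPTS:
            output = ask_model(prompt)
            refused = any(m in output.lower() for m in REFUSAL_MARKERS)
            writer.writerow([today, prompt, len(output.split()), refused])
```

Shrinking word counts or a rising refusal rate over months is exactly the pattern of decline described above, captured in data rather than memory.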
How does AI-enshittification affect users?
The enshittification of AI products can have profound negative impacts on users who have come to rely on these tools for daily tasks and workflows. This phenomenon manifests in various ways, each contributing to a deteriorating user experience.
Productivity and output quality often take a significant hit as AI models become less reliable and capable. Users find themselves spending more time fact-checking, editing, and rewriting outputs, which defeats the purpose of using AI to streamline work processes.
The degraded writing quality can reflect poorly on users who have incorporated AI into their content creation processes, potentially damaging their professional reputation.
Moreover, inconsistent and error-prone responses undermine trust in AI-assisted research and analysis, making it difficult for users to rely on these tools for critical tasks.
The financial implications of AI-enshittification are also noteworthy. Users who have invested time and resources into learning and integrating a particular AI tool face high switching costs when quality declines.
The prospect of migrating prompts, templates, and workflows to a new platform is both disruptive and time-consuming. Paying users may feel trapped into continuing subscriptions to preserve access to past work and familiar interfaces, creating a sense of being held hostage by a deteriorating service.
Frustration and wasted effort become common experiences for users of enshittified AI products.
Attempting to use an AI assistant that has become unreliable and limited is not only frustrating but also demoralizing. Users must constantly rephrase queries and retry prompts to get acceptable responses, leading to a significant waste of time and energy.
The mental models and strategies users have developed to interact with a particular AI become obsolete as performance degrades, rendering their learned skills less valuable.
Collaborative workflows and team dynamics can also suffer from AI-enshittification. Organizations that have built processes around specific AI tools face coordination challenges when quality becomes inconsistent.
Teams must realign on new tools and interaction patterns to maintain productivity, which can be a complex and time-consuming process. Projects and initiatives that relied heavily on machine intelligence may be undermined by degraded AI model performance, potentially leading to missed deadlines or subpar results.
Perhaps most critically, AI-enshittification erodes trust and confidence in AI technologies. Users who have put faith in AI companies’ marketing claims and invested in learning their products feel betrayed when quality declines.
That sense of betrayal breeds cynicism and suspicion towards new AI tools and services, potentially slowing adoption of genuinely beneficial technologies.
The declining performance of once-promising AI tools reinforces concerns about AI hype outpacing reality and highlights the limitations of current language models.
The enshittification of AI products robs users of opportunities for growth and innovation.
As AI capabilities degrade, users miss out on the potential benefits of more advanced language models and techniques. Those who stick with enshittified products settle for suboptimal results and slower progress compared to those with access to better tools.
The degradation can widen the gap between consumer and enterprise applications, concentrating the benefits of AI among large businesses with resources to access or develop superior alternatives.
Is AI-enshittification inevitable for all AI services?
While the incentives and market dynamics that enable AI-enshittification are powerful, not all AI companies are destined to degrade their products over time. There are several factors that could counteract enshittification and sustain high-quality consumer AI offerings:
User ownership and governance
- AI services that give users a stake in the platform through ownership shares, tokens, or credits could better align incentives
- User-owned AI co-ops and decentralized autonomous organizations (DAOs) could prioritize product quality and sustainability
- Governance structures that give users a voice in product decisions could counteract short-term profit-seeking
Open-source and community-driven development
- AI models and tools that are developed in the open with community contributions are less vulnerable to unilateral enshittification
- Open-source projects can fork and continue development if the original maintainers abandon or degrade the product
- Community-driven AI projects are more likely to prioritize user interests and long-term viability over short-term growth
Alternative business models
- AI companies that monetize through value-added services, API access, or enterprise licensing rather than consumer subscriptions may have less incentive to degrade the core product
- Offering specialized AI models for specific use cases could generate revenue without sacrificing general-purpose model quality
- Crowdfunding, grants, and sponsorships could provide alternative funding sources for consumer-focused AI projects
Increased competition and differentiation
- As the AI market matures and more players enter, companies may differentiate on product quality and reliability rather than racing to the bottom on price
- Niche AI providers could thrive by serving specific user communities with high standards and specialized needs
- Increased competition could give users more options and reduce lock-in, making enshittification a riskier strategy
Regulatory and ethical constraints
- Governments and industry bodies could develop standards and regulations around AI product quality, transparency, and user protection
- Ethical frameworks and guidelines could discourage practices like deliberate capability degradation and bait-and-switch tactics
- Public awareness and consumer protection campaigns could increase scrutiny on AI companies and reward those that prioritize user interests
Long-term reputational considerations
- As users become more savvy and discerning, AI companies may see maintaining product quality as essential for building long-term brand trust and loyalty
- Companies that develop a reputation for enshittification may struggle to attract top talent and partners
- Consistent quality and transparency could become a competitive advantage in the AI market
While the challenges posed by AI-enshittification are significant, a combination of user empowerment, open development practices, and long-term thinking could help sustain high-quality AI products. As the AI ecosystem matures, users will likely become more discerning and demand greater transparency and accountability from providers.
How can users protect themselves from AI-enshittification?
AI-enshittification poses significant risks to users who have come to rely on AI tools for various tasks. However, there are several strategies and practices that can help mitigate the impact and ensure more sustainable access to high-quality AI tools.
Diversification and Open-Source Prioritization
One of the most effective ways to protect against AI-enshittification is to diversify your AI tools and prioritize open-source solutions. This approach reduces dependency on a single provider and leverages community-driven development.
| Strategy | Implementation |
| --- | --- |
| Diversify AI tools | Use multiple AI platforms for different tasks; regularly assess performance and reliability; spread work across different providers |
| Prioritize open-source | Favor community-driven AI tools and models; contribute to open-source projects; participate in user forums and governance discussions |
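One way to make diversification concrete is a thin abstraction layer, so your workflow calls your own function rather than a vendor SDK. The sketch below is schematic: the provider functions are stubs you would replace with real client calls.

```python
# Provider-agnostic wrapper: the rest of your workflow calls `ask`,
# never a specific vendor's SDK. The provider functions are stubs to
# be replaced with real client calls.

def ask_provider_a(prompt: str) -> str:
    raise NotImplementedError("replace with a real client call")

def ask_provider_b(prompt: str) -> str:
    raise NotImplementedError("replace with a real client call")

PROVIDERS = [("provider_a", ask_provider_a), ("provider_b", ask_provider_b)]

def ask(prompt: str) -> str:
    """Try providers in order, falling back when one fails."""
    errors = []
    for name, call in PROVIDERS:
        try:
            return call(prompt)
        except Exception as exc:  # outage, rate limit, missing stub...
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

Because switching providers becomes a one-line change to the `PROVIDERS` list, no single company’s quality decline can hold your workflow hostage.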
Vigilance and Preparedness
Staying informed about product changes and having a clear exit strategy are crucial for protecting yourself against sudden quality degradation or unfavorable policy shifts.
| Action | Details |
| --- | --- |
| Monitor changes | Review release notes and official communications; watch for pricing and feature availability changes; proactively communicate with AI providers |
| Prepare exit strategy | Regularly export and backup important data; develop a migration plan for alternative tools; stay informed about new AI offerings in the market |
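“Regularly export and backup important data” can be as simple as a dated snapshot to files you control. A minimal sketch, where `fetch_my_conversations` is a placeholder for however your service exposes its history (a built-in export, an API, or a manual download you post-process):

```python
# Snapshot conversation history to a dated JSON file you control, so a
# future migration never means losing past work. `fetch_my_conversations`
# is a placeholder for your service's export mechanism.
import datetime
import json
from pathlib import Path

def backup_conversations(fetch_my_conversations, backup_dir: str = "ai_backups") -> Path:
    Path(backup_dir).mkdir(exist_ok=True)
    stamp = datetime.date.today().isoformat()
    path = Path(backup_dir) / f"conversations-{stamp}.json"
    path.write_text(json.dumps(fetch_my_conversations(), indent=2))
    return path
```

Run on a schedule, this keeps your exit strategy current instead of something you scramble to assemble after quality has already collapsed.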
Supporting User-Aligned Models and Advocacy
Supporting business models that prioritize user interests and advocating for better standards can help shape a more user-friendly AI landscape.
| Approach | Implementation |
| --- | --- |
| Support user-aligned models | Prioritize transparent, user-focused AI services; pay for quality, sustainable AI products; provide feedback to user-centric AI companies |
| Advocate for standards | Support development of industry best practices; participate in public discussions on AI ethics; hold AI companies accountable through reviews and feedback |
Continuous Learning and Skill Development
Investing in your own AI literacy and skills is a powerful way to navigate the evolving AI landscape and make informed decisions.
| Focus Area | Actions |
| --- | --- |
| AI literacy | Improve understanding of AI systems and their limitations; stay updated with the latest AI research and practices |
| Skill development | Learn prompt engineering and model evaluation; develop output analysis skills; practice critical thinking in AI interactions |
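One habit that makes prompt-engineering skill portable is keeping prompts as versioned, parameterized templates rather than ad-hoc text typed into a chat box. The template below is just an illustration:

```python
# Prompts kept as named, parameterized templates are easy to version,
# reuse, and send unchanged to any provider for a fair comparison.
SUMMARY_TEMPLATE = (
    "You are a careful editor. Summarize the text below in {n_bullets} "
    "bullet points, keeping all names and figures exact.\n\nText:\n{text}"
)

def build_prompt(text: str, n_bullets: int = 5) -> str:
    return SUMMARY_TEMPLATE.format(n_bullets=n_bullets, text=text)
```

If a service degrades, templates like this move with you, and rerunning the same template on two providers gives a like-for-like quality comparison.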
This proactive approach not only protects individual interests but also contributes to shaping a more responsible and user-centric AI industry.