Platform lock-in is a thing.
Sometimes it is done intentionally to keep customers on your platform. At other times it happens naturally as people build connections within a given system. I'm referring to the challenge of migrating to a different platform after you've established a body of work or other substantial presence on a given online platform. Let's consider two situations and ask whether large language model lock-in will become a thing.
Traditional Web Platforms
In the Web 2.0 era, Flickr was the destination for photographers online. If you were a person who shared your photos online, you had some presence on Flickr. Eventually, other platforms emerged, and Flickr became less of an online destination.
Photographers started migrating to other platforms, but many of them left their Flickr work behind. Even if they had 5,000 or 10,000 photos on Flickr, those photos were essentially abandoned and were not visible on the new platform. Folks stopped interacting in Flickr groups and related activities as they spent their time in new places.
Fast-forward several years, and social networks became daily destinations. Depending on who your audience was (friends, family, or clients), you would invest a significant amount of your time and share information on your chosen platforms.
Platforms fade for varying reasons. Sometimes they never gain favor with a newer generation of users. Sometimes they lose popularity due to the actions of the platform's operator. When Facebook became less popular among younger generations and people became aware of a series of questionable privacy decisions made first by Facebook and then by Meta, many users began to consider leaving the platform. I'm one of those who did, but many folks stuck around.
Why did they stick around? It wasn’t often because they loved the platform. They stuck around because that’s where their friends and family were. They were members of established groups. Those connections would need to be rebuilt if one were to move to a new platform.
Tired: “I can’t quit Facebook because my friends are there.”
Wired: “I can’t leave [AI Model] because all my context is there.”
AI Models as Platforms
We're entering a new era of online platforms, specifically the major large language models (LLMs). There are a few major players in this space. One is OpenAI with ChatGPT. Another is Anthropic's Claude. Google has Gemini. Meta is building its own AI models and has begun integrating them into Facebook and Instagram. xAI's Grok integrates with the X social network.
At any point in time, you can identify which LLM might have a competitive edge for a specific purpose. For instance, many in the software development industry claim that Anthropic's Claude currently has an edge over other models for coding work. ChatGPT is the most widely used for general purposes.
With any LLM, the quality of the output depends heavily on the quality of the prompt and on what the model knows about what you're asking. When these models first came on the scene in 2022–2023, their context window (how much they could remember and use while generating an answer for you) was pretty short. It was generally limited to what you provided as part of your prompt or your interaction in a given session.
Over time, these context windows got larger. In early 2025, Google Gemini and then ChatGPT became the first mainstream models to remember all of your chat history and use it as part of the context when crafting their output for you.
When large language models first gained popularity, we heard predictions that prompt engineering would become a job title for those skilled at writing well-crafted prompts for LLMs. That term has since morphed into context engineering. It's all about feeding the right information into the large language model to obtain the best possible results: those tailored to your specific needs and situation.
Where do you live? What industry are you in? Are you a business owner? Who are your clients, if so? What’s your economic situation? What do you do for fun? What’s your familial or relationship situation? All of this information can help a large language model provide a well-crafted answer that is tailored to you. What’s your computing and technology situation? Do you use Windows? Do you use a Macintosh? Are you an Android user or an iPhone person? These might be relevant to help the LLM give you the best possible results.
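To make context engineering concrete, the personal details above could be distilled into a reusable context block that gets prepended to every prompt. A minimal sketch; the field names and values are made up for illustration and don't reflect any particular vendor's API:

```python
# Illustrative only: a personal-context block assembled into a preamble
# that could be pasted into a model's custom instructions or first prompt.
PROFILE = {
    "location": "Portland, Oregon",
    "industry": "wedding photography",
    "role": "business owner",
    "platform": "macOS and iPhone",
}

def build_preamble(profile: dict) -> str:
    """Turn a profile dict into a plain-text context preamble for an LLM prompt."""
    facts = "; ".join(f"{key}: {value}" for key, value in profile.items())
    return f"Context about the user ({facts}). Tailor your answers accordingly."

print(build_preamble(PROFILE))
```

The point isn't the code; it's that this context currently lives inside one vendor's memory feature rather than in a file you control.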
As LLMs begin to remember our entire history and use that information as context when providing answers, a new form of platform lock-in develops. Suppose I use a given model, such as ChatGPT or Google Gemini, for a year and a half and provide it with all sorts of information about me. The model accumulates a lot of valuable context that positively influences the quality of the results it provides.
If I decide to hop over to a different model, that context is lost. I'm starting from scratch. I've lost a year and a half of context that won't inform the new model's results. As a consequence, the new model's output might be inferior, not because the model itself is inferior, but because it lacks context.
As LLMs become more common, either as standalone experiences or integrated into the other software we use, a platform lock-in situation will likely develop because people will want to use the model that has all their context and history.
In an ideal world, there would be an open, portable format for this data, but we rarely see such ideals realized in online platform services. The mainstream population hasn't cared about lock-in with services such as Facebook or Instagram. Will they care about their AI model query history?
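There is a partial workaround today: some providers let you export your chat history as JSON, and that export can be distilled into a plain-text preamble for a new model. A sketch of the idea, assuming a hypothetical export format; the field names below are illustrative, not any vendor's actual schema:

```python
import json

# Hypothetical export: a list of conversations, each with a title and a
# list of {"role", "content"} messages. Real vendor exports will differ.
SAMPLE_EXPORT = json.dumps([
    {
        "title": "Travel planning",
        "messages": [
            {"role": "user", "content": "I live in Seattle and use a Mac."},
            {"role": "assistant", "content": "Noted! How can I help?"},
        ],
    }
])

def distill_context(export_json: str, max_chars: int = 4000) -> str:
    """Flatten an exported chat history into a plain-text preamble that can
    be pasted into a new model's first prompt or custom instructions."""
    conversations = json.loads(export_json)
    lines = ["Background about me, distilled from my previous assistant:"]
    for convo in conversations:
        lines.append(f"## {convo['title']}")
        for msg in convo["messages"]:
            if msg["role"] == "user":  # keep only what *you* said
                lines.append(f"- {msg['content']}")
    return "\n".join(lines)[:max_chars]

print(distill_context(SAMPLE_EXPORT))
```

A flattened summary like this is a crude substitute for a model's accumulated memory, which is exactly why the lock-in is sticky.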