Custom GPTs - What is the Point?

In November 2023, OpenAI introduced the GPT Builder, a tool designed to simplify the creation of custom GPTs (https://openai.com/blog/introducing-gpts). The platform empowers users to develop what are essentially scoped mini-apps tailored to specific use cases or ideas. The standout feature of the GPT Builder is its 'no code' approach. Users can customize their GPTs using plain language instructions, eliminating the need for traditional coding. The interface makes it possible for people without programming expertise to create and tailor GPT applications to their requirements.

Creating a custom GPT involves a few straightforward steps:

  1. Define the Scope: Decide what your GPT should do.
  2. Add Data to the 'Knowledge Base': Upload relevant information as files; the builder refers to this as 'knowledge'.
  3. Add Instructions: Use the plain-text instructions interface to tell your GPT how to use its knowledge base, how to interact with users, and so on.
  4. Provide Feedback via the Chat Interface: Once your GPT is working, you can talk to it and give feedback. The chat will update the GPT based on that feedback.
  5. Add Actions (API Calls): If you want your GPT to do more than just talk, like fetching data from the internet or integrating with other software, you add 'actions'. These are API calls that allow your GPT to interact with external services and databases (see the sketch after this list).

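Under the hood, an action is an HTTP endpoint that you describe to the builder (via an OpenAPI schema) so the GPT knows when and how to call it. As a rough sketch, with hypothetical route and field names, the service behind an action could be as small as this:

```python
# Hypothetical backend for a GPT 'action': a minimal HTTP endpoint the
# GPT can call. Route and field names are illustrative only; in the GPT
# Builder you describe this endpoint with an OpenAPI schema so the GPT
# knows when and how to call it.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class LookupRequest(BaseModel):
    query: str  # free-text query the GPT sends on the user's behalf

@app.post("/lookup")
def lookup(req: LookupRequest) -> dict:
    # Stub: replace with a real search against your own service or data.
    return {"results": [{"title": "Example record", "match": req.query}]}
```
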
You can also have the GPT use OpenAI's built-in capabilities; for example, enabling browsing lets it look things up on the net.

By following these steps you can build a custom GPT. You can keep your custom GPT for your own use or offer it to others via the GPT Store. The GPT Store, launched a week ago, is a marketplace where users with a paid OpenAI account can browse user-created GPTs.

The simplicity of creating GPTs raises questions about the value proposition of custom GPTs and the GPT Store. Using terminology from the proprietary software and venture capital worlds, one could argue that most GPTs are 'indefensible,' meaning it's fairly easy to replicate the functionality of someone else's GPT, possibly in a very short time. This raises several immediate questions: what value do custom GPTs really offer, and how can you differentiate yours from other GPTs? And further, is it possible for your GPT to be, even a little, 'defensible'?

I've been experimenting with GPTs that offer publishing-related functions, such as extracting metadata from a manuscript and creating descriptions for accessible images. The thoughts that follow are in the context of these functions. It's important to note that my focus has not been on developing GPTs for purely educational purposes or for gaming and the like.

Utility and Audience

My first question is: what types of GPT could be useful? What is a productive functional zone for a GPT?

We could imagine all sorts of functions, but one of the most critical design constraints is the OpenAI environment itself. OpenAI limits distribution of custom GPTs to its paid users via the OpenAI website. This narrows the useful functional scope significantly. So the question "what should I build?" is tightly coupled to the question of who you would expect to use it within the OpenAI environment.

For example, I created a GPT to extract metadata from academic papers and output JSON or JATS. But how useful is this really? Few journals would expect staff to manually copy content into my GPT via the store, convert it, and copy back the results one paper at a time. This cumbersome workflow likely only benefits small journals that require JATS but are unfamiliar with the format or lack the resources to produce it. How many in this narrow market would use the GPT Store (available only to paid users) to create JATS? I'm guessing not many.

So the initial design question isn't just about the functional scope, but also about who would use your GPT given the environment's constraints.

I do think offering custom GPTs via the store has value even if they would not see heavy use there. You can use the store to publish ideas that could be developed further, promoting them through your network. I've created several GPT demos, for example, that perform some interesting tasks while also directing users to this blog. I've also found that sharing descriptions of, and links to, the GPTs on LinkedIn generates interest from relevant folks.

So GPTs can also act as prototypes and/or as a way to generate attention and connections around the concepts. For me, catalyzing connections for future collaborations is of particular interest. I'm looking to further develop my existing prototypes primarily as a means to attract partners to work together on bringing these concepts to full execution outside of the OpenAI ecosystem (using open source models).

Outside of this, the overall usefulness of a functional publishing tool within the GPT Store is somewhat limited, though it is not without its uses.

APIs as Differentiators

API (Application Programming Interface) integrations significantly amplify the capabilities of custom GPTs, potentially turning them into more dynamic and interactive tools. By integrating APIs ('actions'), a GPT can access external services and databases, essentially enabling it to pull in real-time data, interact with other software, or even execute specific tasks outside its internal knowledge base.

For example, if you have a service that can be accessed via an API, such as a database of scientific articles, weather information, or a stock market feed, you can integrate this with your GPT. This means that when someone uses your GPT to ask a question, the GPT can reference services via APIs to provide updated or specific information that it wouldn't have by itself. API integrations can also facilitate other types of actions. For example, a GPT that helps with shopping lists could integrate with a grocery store's API to not only suggest items but also place an order directly.

Integrations can significantly broaden your GPT's usefulness.

From my observation of the GPT store, some of the more popular GPTs do exactly this. In the 'research' category there are, for example, several custom GPTs that integrate directly with Semantic Scholar's recommendation engine API. For those unfamiliar, Semantic Scholar is a site researchers use to find information on specific topics; it employs AI to deliver a list of relevant ('adjacent') articles, and this functionality is also accessible via an API. GPTs in the store utilizing this API essentially offer a chat interface, allowing researchers to conduct their searches through chat rather than a traditional search interface (which in itself raises interesting questions about the comparative effectiveness of search versus chat strategies).
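
As a rough sketch of what such a GPT does behind the scenes, the call to Semantic Scholar's recommendations endpoint looks roughly like this (the path and field names follow the public API docs at the time of writing, and the paper ID is just an example; check the current documentation before relying on any of it):

```python
# Minimal sketch: fetch papers related to a given paper from the
# Semantic Scholar recommendations API. A custom GPT 'action' wraps
# exactly this kind of request in a chat interface.
import requests

def recommend(paper_id: str, limit: int = 5) -> list:
    url = ("https://api.semanticscholar.org/recommendations/v1/"
           f"papers/forpaper/{paper_id}")
    resp = requests.get(url, params={"fields": "title,year", "limit": limit})
    resp.raise_for_status()
    return resp.json().get("recommendedPapers", [])

# Example paper ID taken from Semantic Scholar's own documentation.
for paper in recommend("649def34f8be52c8b66281af98ae884c09aef38b"):
    print(paper.get("year"), paper.get("title"))
```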

The popularity of these 'Semantic Scholar GPTs' suggests that a significant part of the GPT store's value may lie in offering an affordable way to create a chat interface for various APIs. While this functionality can be achieved outside the GPT store (see below), using the store might be a strategic method to reach certain market segments.

So what actually differentiates your GPT?

Providing external API integrations is potentially the most differentiating feature. But that is possibly only true if you own the API. If you provide a public API, then others can leverage it in their own GPTs and might provide a better experience than you do. Do organizations in this situation see custom GPT integrations made by others as an interesting growth opportunity for their API, or do they see them as undermining their offering?

APIs can be a major differentiator if you own the API. But if your API is public, or you don't use an API at all, what else can differentiate your GPT in the marketplace? Since the technical barrier to competing with other GPTs has been removed, if you have no proprietary API, what's left, in a single word, is curation. It's fundamentally about comprehending and meticulously shaping the nuances of specific use-cases to curate a GPT experience that resonates deeply with your intended audience. This primarily comes down to:

  • crafting instructions that provide highly refined responses to user requests
  • guiding the user through a process as determined by your instructions
  • carefully curating the knowledge base to provide better results than your competitors, and instructing the GPT on what to access from the knowledge base and when
  • curating (via the explicit instructions or chat feedback you provide) when to access the web, what to access, and how to translate it to your user
  • testing the GPT and providing feedback via the chat interface to better craft the type of responses you want
  • and finally, and importantly, learning how to train the GPT to keep 'the conversation in scope'

This last point feels very important to me, and it is one I am currently struggling to master. It's crucial to recognize that as a curator, you don't have control over the user's input or the GPT's response. Your role is limited to influencing what the user is likely to ask and what the GPT is likely to respond with. If the conversation deviates into unrelated topics, the experience can quickly become less meaningful. This makes keeping the conversation within a relevant scope particularly important.
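
As an illustration, a scope-keeping fragment of a GPT's instructions might read something like this (the wording is my own invention, not a tested recipe):

```
You help publishers create accessible image descriptions. If the user
asks about anything unrelated to image descriptions, accessibility, or
publishing formats, politely explain that this is outside your scope
and steer the conversation back to the task at hand.
```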

Leveraging your insight and curating the user experience accordingly is going to help ensure the GPT provides value to your user rather than just mimicking a generic conversational agent.

Coupling Functions with Education

I think for the type of apps I'm trying to develop there is also value in incorporating an educational aspect into GPTs. For example, with a GPT like 'Accessible Images', I've combined its primary function with an educational role. This GPT can process an uploaded image, generate a description while considering the reader's level, and then format this description for HTML or EPUB. Alternatively, users can inquire about resources for creating accessible images or ask questions about best practices. This approach seems particularly beneficial as it allows users to execute a task while simultaneously learning about it.
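
For concreteness, the kind of output I have in mind is ordinary accessible markup, for example an image with a populated alt attribute (the description text here is invented):

```html
<figure>
  <img src="figure1.png"
       alt="Line chart showing article submissions rising steadily from 2019 to 2023." />
  <figcaption>Figure 1: Submission growth, 2019-2023.</figcaption>
</figure>
```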

Once again, if this does prove to be valuable to users then it is a result of curating the right scope and responses to provide a value-add educational experience.

Taking the GPT outside of the store

It's worth noting that it is possible to replicate GPT functionality by building (coding) your own service and using the OpenAI API. This liberates the model from the constraints of the OpenAI website.

With this approach, you could embed your 'GPT functionality' directly within your own website or business applications. This obviously opens the door to broader possibilities for real-world usage.
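
To make that concrete, here is a minimal sketch of reproducing something like my metadata GPT outside the store, using OpenAI's Python client (v1 style). The model name and prompt wording are illustrative, not a production recipe:

```python
# Rough sketch: a custom-GPT-like service built directly on the OpenAI
# API. The system prompt plays the role of the GPT Builder's
# 'instructions'; swap in whichever chat model you have access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

INSTRUCTIONS = (
    "You extract bibliographic metadata (title, authors, abstract) "
    "from manuscript text and return it as JSON, and JSON only."
)

def extract_metadata(manuscript_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-turbo-preview",  # illustrative model name
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": manuscript_text},
        ],
    )
    return response.choices[0].message.content

print(extract_metadata("Example Title\nJane Doe\nAbstract: ..."))
```

Embedded in your own website or production pipeline, the same pattern can handle batches of papers rather than one chat at a time.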

This highlights an important additional role for the GPT store - it can provide significant value for exploration and feedback when building toward a production application through other means. A custom GPT built and offered for use in the OpenAI ecosystem can offer a convenient way to validate concepts quickly and gather useful insights from others.

Data Privacy and Learning

Lastly, the issues of data privacy and learning in creating custom GPTs remain complex and somewhat unresolved. Many organizations (including the majority of publishers I've spoken to about this) are hesitant to upload data to large proprietary AI models due to privacy concerns, particularly given the ambiguity around how data is used and stored. Trying to get to the bottom of it is tricky. OpenAI's vision documentation states that images processed by their vision features are not retained, which initially sounds good:

After an image has been processed by the model, it is deleted from OpenAI servers and not retained. We do not use data uploaded via the OpenAI API to train our models.
https://platform.openai.com/docs/guides/vision

However, while the above statement indicates that user-provided data is not used to train models, the OpenAI enterprise privacy statement seems to suggest otherwise:

OpenAI trains its models in two stages. First, we learn from a large amount of data. Then, we use data from ChatGPT users and human trainers to make sure the outputs are safe and accurate and to improve their general capabilities
https://openai.com/enterprise-privacy

This apparent contradiction presents a challenge in fully understanding OpenAI's approach to user-provided data.

The real issue is that ambiguity like this surrounding data privacy policies and practices will in itself be a deterrent for many publishers. Clarity and unequivocal assurances on data privacy are essential. Without such transparency, there is a risk of fostering suspicion, potentially impeding wider adoption.

Data privacy is especially important when it comes to building and using custom GPTs and the GPT Store. Presumably most GPT builders do want their GPT to improve over time by learning from user interactions, and at first glance it seems OpenAI supports this: the GPT Builder interface itself includes an option to allow conversation data from your GPT to be used to improve OpenAI's models.

This option suggests that user interactions with a custom GPT can be used to refine and enhance the GPT models. If this is actually what it appears to be, it is good news, as 'learning from conversations' is desirable for the ongoing improvement of your custom GPTs. However, it also brings up privacy concerns. If user data is integral to training and improving these models, how is user privacy safeguarded? The mechanics and privacy implications of this process are not entirely transparent.

Things that are missing

The OpenAI GPT Store ecosystem, in its current early stage, has many limitations. The Store's interface is very restrictive, making it challenging to navigate and identify suitable apps. The absence of user reviews or a recommendation system further hinders this process. There is room for improvement in how builders can customize the interface for their users. Additionally, the ability to integrate the chat functionality into external systems would be a valuable enhancement. A more transparent and controlled approach to handling builder and user data is essential. The list is long.

However, considering that this is just the beginning, only a week since launch, there is a long road ahead for progress and innovation. It is also worth noting that OpenAI is not alone; others, such as AI Box, are exploring similar paths. I am eager to observe the development of this platform and plan to continue exploring, learning, and sharing insights along the way.

Conclusion

I believe, based on the store's first week, that there is value in creating custom GPTs and offering them on the store. The key to differentiating no-code custom GPTs lies in their ability to address nuanced use-cases and provide unique user experiences, often enhanced through API integrations. Interestingly, this means the role of the creator shifts from technical development to careful curation of content and user interaction, ensuring meaningful and valuable exchanges.

It's also important to consider some additional benefits of custom GPTs, such as rapid prototyping or gathering people around ideas that might lead to further explorations or collaborations.

However, while the potential and flexibility of custom GPTs are immense, challenges like data privacy and the balance between model improvement and user confidentiality remain crucial considerations.

As we continue to explore and understand the capabilities and implications of these tools, it's evident that the journey of custom GPTs is just beginning. I'm definitely along for the ride and will report as I go. The next much-promised feature, it seems, is revenue sharing... I'm super interested to see how that will work. Stay tuned!

© Adam Hyde, 2024, CC-BY-SA
Image public domain, created by MidJourney from prompts by Adam.