
How our position on AI has changed

Prior to RSL, our options at Medium were to block AI companies manually (which is what we were doing) or join the Cloudflare initiative to charge or block AI crawlers (which wouldn’t allow us to accept payments for individual stories, only at the site-wide level).

To start, we’ve implemented the simplest version of this new RSL Standard, which prohibits AI companies from using your stories to train their AI models but allows them to summarize and link back to your writing in AI-generated search results. This is a direct extension of our current policy, which we launched in 2023.

Back then, we noted that there was an issue of basic fairness. AI companies were profiting off your writing and giving nothing back (other than an influx of AI-generated slop). So we put out a call for AI companies to offer consent, credit, and compensation. Behind the scenes, we tried to get major internet companies to work together. It’s wild to us that it took this long to get to a formal protocol. In our view, there was never a viable negotiating strategy unless we all banded together. So we’re happy, finally, to support a formal, standardized way for all content owners to tell AI companies the rights and restrictions on their content.

At Medium, we are trying to navigate this issue on your behalf and come to thoughtful default policies. Our original position in 2023 felt straightforward: Why would anyone allow AI companies access to their writing when there was no value coming back to the community here? The situation has changed quite a bit, and now there is some value coming back, in the form of visitors to your writing and the potential for some financial compensation.

To that end, I want to talk to all of you about the current state of the credit that is coming with this standard (in the form of citations to and views on your writing), and what we think the upcoming options will be for financial compensation. If we can get clear on credit and compensation, then RSL gives us a way to define consent.

{{subscribe}}

Principles that support our mission, our writers, and our readers

One of the major use cases for AI companies right now is acting as search engine replacements; examples include ChatGPT, Perplexity, and Google Gemini. Originally, these AI search replacements just offered up their own generated answers without citing the stories they had trained on. But more recently, it has become common for them to both credit the source material (you) and drive clicks to that source material.

Medium is Humans First. We’ve found that this leads to three principles that help us make decisions about artificial intelligence.

  1. Our mission to deepen understanding requires human stories that contain human thoughts, human emotion, and human experience.
  2. For our writers, we will protect the incentives that make it worthwhile for them to write and share their stories.
  3. For our readers, we will strive to give them agency to examine, influence, avoid, and override AI.

How we apply these principles

What we were asking AI companies for in terms of credit effectively means that we want them to help promote your writing and send you readers.

For example, earlier this year, ChatGPT was the fastest-growing source of referral traffic for Medium writers. ChatGPT sends only about 1.3% as many readers as Google, but, importantly, those readers are 4x more likely to convert into Medium members.
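Those two figures compound. Here is a rough back-of-envelope check of what they imply together; only the 1.3% and 4x ratios come from this post, and the absolute Google numbers below are invented placeholders purely for illustration:

```python
# Back-of-envelope: ChatGPT referrals vs. Google referrals as a source of members.
# Only the 1.3% and 4x ratios come from the post; the absolute figures are made up.
google_referrals = 1_000_000         # hypothetical monthly readers arriving from Google
google_conversion = 0.001            # hypothetical baseline member-conversion rate

chatgpt_referrals = google_referrals * 0.013   # ~1.3% as many readers as Google
chatgpt_conversion = google_conversion * 4     # 4x more likely to convert

google_members = google_referrals * google_conversion
chatgpt_members = chatgpt_referrals * chatgpt_conversion

# Relative member contribution: 0.013 * 4 = ~5.2% of Google's, despite 1.3% of traffic.
ratio = chatgpt_members / google_members
print(f"{ratio:.3f}")  # 0.052
```

In other words, on these assumptions ChatGPT punches about four times above its traffic weight in terms of new members.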


So far, OpenAI has also done the best job of giving content creators a lever for consent. In particular, they run separate agents with separate rules, so we can block the agent that trains on your writing while allowing the agent that includes a credited citation to your writing in search results. The RSL Standard will let us formalize those rules for all AI companies.

This might sound like splitting hairs if you aren’t up on the way that AI companies operate. In the most simplistic view, there are often two parts. The first part is the LLM, which is a giant statistical model of human writing representing trillions of data points, none of which can be cited back to source material. In the ChatGPT case, we are blocking their ability to train their LLM on your writing because they are not offering any citation or compensation in return.
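Concretely, this kind of per-agent consent is usually expressed at the crawler level in robots.txt. Here is a sketch of the policy described above using OpenAI’s published crawler names (GPTBot for model training, OAI-SearchBot for cited search results); check OpenAI’s crawler documentation before relying on these exact names:

```text
# Block the crawler that collects model-training data
User-agent: GPTBot
Disallow: /

# Allow the crawler that powers cited, linked search results
User-agent: OAI-SearchBot
Allow: /
```

The RSL Standard layers richer licensing terms on top of this, but the basic mechanism of distinguishing the training agent from the search agent is the same.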

{{framework}}



The other part is a context window where the AI retrieves a smaller amount of content in response to your search and has the LLM analyze it. These context windows do allow for credit in the form of citations and are specifically why ChatGPT is now sending so many readers to Medium.
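The split between the two parts can be sketched in code: citations fall out of the retrieval step, not the model weights, because the source URL travels alongside each retrieved excerpt. The following toy Python is purely illustrative (all names and data are invented; no real AI service is involved):

```python
# Illustrative sketch: why retrieval enables citations while training does not.
# The "index" stands in for a search index over licensed content.

def retrieve(query, index):
    """Return (url, excerpt) pairs whose text mentions the query terms."""
    return [(url, text) for url, text in index if query.lower() in text.lower()]

def answer_with_citations(query, index):
    """Compose an answer from retrieved excerpts, keeping source URLs."""
    hits = retrieve(query, index)
    summary = " ".join(text for _, text in hits)   # excerpts fill the context window
    citations = [url for url, _ in hits]           # URLs survive to be credited
    return summary, citations

index = [
    ("https://medium.com/@writer/rsl-post", "RSL lets publishers set AI usage terms."),
    ("https://medium.com/@writer/other", "An unrelated story about gardening."),
]

summary, citations = answer_with_citations("rsl", index)
print(citations)  # ['https://medium.com/@writer/rsl-post']
```

A trained LLM, by contrast, has already blended its sources into model weights, so there is no URL left to attach to any given output.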

Given that people publish on Medium to get readers, we think the obvious default stance is to allow a product like ChatGPT to use your writing in its AI-generated search results if it sends significant traffic in return. We are hoping that Google will catch up to OpenAI; so far, traffic from Gemini AI summaries is very poor.


The AI companies should pay compensation

Now to the second part, about payment. The RSL Standard includes two options for negotiating financial compensation from AI companies.

In the best-case scenario, the RSL Standard will lead to a direct payment to you based on the way AI companies use your writing. This future possibility comes because the standard also defines a separate non-profit agency, the RSL Collective, to negotiate, collect, and distribute compensation on behalf of the entire internet. Think of it as similar to ASCAP, the music rights organization that negotiates with venues, broadcasters, and streaming services on behalf of musicians.

We lean toward internet standards and think the RSL Collective is probably the better approach for the overall health of the internet. But we also understand that the full adoption of this may be a ways off.

In the meantime, the RSL Standard also allows for a simple way to indicate to AI companies that we are willing to negotiate. This part of the standard is simply a listing for a contact form.

We’d like to do this, do it transparently, and do it with the goal of passing the compensation on to you. In our original position, we called this Negotiating as a Service. Some fights are too small for individuals. But we think this one is only medium difficulty for a company like Medium.

As with credit in the form of traffic, the RSL Standard is flexible enough to allow individual writers to opt out. So here again, a request: if you think you would opt out, can you say why? Leave a response; I’ll be reading them.

Wherever you personally land, I hope you’ll agree that the RSL Standard (and the RSL Collective) represents a much-needed and meaningful step forward in clarifying the relationship between writers and AI companies. We see it as a framework that lets us answer some fundamental questions, and provides a mechanism for giving writers the consent, credit, and compensation they deserve.

Conclusion

Our mission and our business model at Medium are the same: to deepen understanding. For writers to succeed on Medium, for readers to enjoy Medium stories, and for Medium to continue, this requires real stories — human stories, written from human experience and with human wisdom. So then what is the value of artificial intelligence, and what are the pitfalls to avoid? Is there a way to use AI to deepen understanding, or help writers tell their human stories? Two years ago, we thought the value to writers and readers was less than zero.

The AI companies had leached value from your writing without offering consent, credit, or compensation. Then they enabled a wave of spam that tried to replace your writing with hallucinated slop. As we’ve continued working on making Medium the best place to read and write, we’ve noticed and heard from our readers and writers that some use of AI is starting to be useful. We are also finding uses in our own work. In our recent surveys, more than half of Medium readers are using AI tools in some capacity. AI is a tool that exists in the world. We want to make sure that we explore all possible options to make Medium better for readers and writers. Here’s how we’re starting to do so with AI, both in the principles that we take into consideration and in how we’re currently using AI at Medium already.
