Looking just at the stock markets and headlines, the SaaS-pocalypse continued this week. On Tuesday, Anthropic released its latest model, Claude Sonnet 4.6, as a “full upgrade” of its skills across coding, computer use, long context reasoning, agent planning, knowledge work and design.

And for the second time in a few weeks, stocks of SaaS companies fell following Anthropic’s announcement. Oracle was down nearly 4.5%, Thomson Reuters lost about 5.4%, Salesforce dropped about 3.8% and Intuit cratered nearly 5.5%. It wasn’t as bad as the earlier rout that many SaaS makers’ stock prices suffered after Anthropic announced its specialized Claude Cowork plugins meant for use in specific work roles—including legal, marketing, finance, customer support and bio research—but the selloffs weren’t good news for Wall Street’s faith in the future of SaaS.

But there’s a really big “however” here: Enterprises were already replacing SaaS, and that shift was underway before Anthropic’s developments this year. Enterprise app generation platform Retool found that 35% of companies have already replaced at least one SaaS tool with something they built themselves, and 78% plan to replace more SaaS tools this year. About a third of companies are building software for workflow automation or internal administration tools. About three in 10 are building BI tools, while close to a quarter are building CRM or sales tools, form builders or project management tools.

Companies told Retool that they’re moving away from SaaS to save money in subscription fees, as well as to create platforms that are better customized to their needs. Vibe coding is playing a big part in this movement. Just over half of builders told Retool they’ve built at least one piece of software with AI. And most have used one of the top LLMs to do it—70% have used ChatGPT, 56% have used Google’s Gemini, and 53% have used Claude.

The study doesn’t spell the end for the SaaS giants just yet. AI-enabled solutions are still in their nascent stage, and 49% of those answering the survey haven’t been able to use them to build workable software for their company yet. Vibe coding is extremely helpful, but not error-free, and 22% of potential builders found that the code contained hallucinations or incorrect data structures. Not to mention that six in 10 have built tools outside of IT oversight in the last year—a rocky path to adoption.

But clearly AI development is continuing, and to do that well, companies also need to make sure their data and data governance are in order. I talked to Andi Gutmans, vice president and general manager of Google Data Cloud, about how companies make sure their data is ready. An excerpt from our conversation is later in this newsletter.


This is the published version of Forbes’ CIO newsletter, which offers the latest news for chief innovation officers and other technology-focused leaders. Click here to get it delivered to your inbox every Thursday.



ARTIFICIAL INTELLIGENCE

Is AI a job killer? It’s not a simple question, and the answer isn’t simple either. There are plenty of doomsday predictions out there, and they’re coming from people who should know. Anthropic cofounder and CEO Dario Amodei published an essay last month arguing that AI is getting so much smarter and more capable every day that about half of the world’s entry-level white-collar jobs will be disrupted within five years, and AI that is more capable than everyone could arrive within two. Microsoft AI CEO Mustafa Suleyman told the Financial Times most white-collar tasks will be fully automated by AI by next year, former presidential candidate Andrew Yang said millions of office jobs will “evaporate” in the next two years, and billionaire investor Vinod Khosla predicted AI may end up eliminating the vast majority of today’s jobs—and that as many as 125 million Americans should be exempted from income tax.

But the problem with these doomsday predictions, writes Forbes senior contributor Joe McKendrick, is that they could become a self-fulfilling prophecy that is based on predictions—not actual performance. A recent study in Harvard Business Review found that most of the companies that have cut jobs because of AI are anticipating that the technology will displace workers, but very few entire job positions have actually become unnecessary. “The phenomenon of AI taking jobs and reducing hiring is somewhat artificial,” the study argues. “Company executives who make these moves may really believe that AI will eventually lead to large-scale automation, even though it hasn’t yet.”

So what should companies do? Other than continuing to assess current needs—actual needs, not the anticipated ones from thought leaders who aren’t a part of your business—companies can do better by conducting comparative experiments to see how well AI handles tasks compared to people. Stop cutting jobs based on predictions; if cuts feel necessary, make them through attrition. And ask existing employees to find ways AI could improve their workflows—they know better than anyone what their specific roles involve that AI could not do.

McKendrick also suggests that companies get creative about reassigning employees to new jobs in the enterprise. After all, technology has consistently changed the way work is done, but people are still needed to do important tasks. And in the wide view, AI still is creating jobs. McKendrick writes that tech companies are building out data centers and other infrastructure to power enterprise AI needs, and employment data shows that jobs in construction are growing.

TECHNOLOGY + INNOVATION

As AI technology gets better, the delicate balance of what should and should not be used changes. Last week, ByteDance released its new AI-powered video generator, Seedance 2.0. Forbes contributor Rob Schmelzer writes that the new version is a big step up from previous AI video generation. It allows for a variety of source files to create a video—text, image, audio and video—and is laser-focused on physical accuracy. Demos showed realistic-looking videos of things like a figure skating pair in a competition, a woman hanging laundry on a clothesline strung from a balcony, and a child walking into and interacting with the scenes in several famous paintings.

People have been awed by Seedance 2.0’s realism, and have spent the last week using it to make videos. A viral video of a fistfight between Brad Pitt and Tom Cruise has made the rounds online, and Forbes senior contributor Paul Tassi writes that disgruntled fans are making “better” endings to their favorite shows, including Game of Thrones and Stranger Things. And Hollywood en masse is threatening ByteDance with legal action, saying that the software’s ability to convincingly reuse IP from actors, characters and voices could amount to massive copyright infringement. Schmelzer writes that ByteDance has been “taking steps to strengthen current safeguards” against intellectual property theft, though what those steps actually entail is not known.

Regardless of how ByteDance reacts, these threats have the potential to change the standards for how copyrighted material is used by AI, Schmelzer writes. Several lawsuits against AI companies over copyrighted written material and music are making their way through the courts, but short-form video content is quickly moving to the forefront of the fight. Copyright law was not written with AI in mind, and companies using AI tools for video generation might find themselves unwittingly infringing on someone else’s IP.

Outside of the courtroom, new laws and regulations could help define the way these issues will be handled. However, it might take a lot to get federal government action to protect copyrights—especially because President Donald Trump’s administration has loudly opposed any AI regulations, and is known for posting AI-generated images and videos on official government social media accounts.

BITS + BYTES

Google Data Cloud VP On How To Make Progress On Your Data Journey

AI needs good data to be successful, but getting to that point can be challenging. Outmoded forms of organization, neglected metadata, governance that doesn’t meet the standards of what you need from generative AI and the new utility of unstructured data can cause problems. I talked to Andi Gutmans, vice president and general manager of Google Data Cloud, about how enterprises can get a handle on their data.

This conversation has been edited for length, clarity and continuity.

In terms of AI, there is so much that can be done, but it all comes down to data quality. All companies are different, but on average, what is the state of data quality in companies today?

Gutmans: I’ve yet to hear a customer tell me, “Hey, we’re in really great shape.” I think for most customers, this has been a journey.

In the previous period, when [companies only needed] data governance for specific parts of the data, we would have data stewards who would be able to curate some of the data. The difference with AI is: Now you want to activate all your data, not just the subset. You’re getting to the point where what worked in the past doesn’t work anymore because you can’t just curate the subset. Now you want to activate all the data, including unstructured.

Also, the level of metadata you need to start to infer has to be richer and much more intelligent to make sure you’re actually getting that grounding for the agents, to make them accurate and be able to reason. I think all of our customers pretty much feel like, “Hey, we’re in the beginning of this journey. How can you help?”

That’s where we’re looking to help them accelerate this journey with the right agentic experiences, where we’re using agents to catalog their data, make sure they get the right data quality, make sure we can infer business semantics, relationships with their data—everything they need to make sure that they can effectively activate AI.

So agents to beget agents, eventually?

It’s funny that you say that. I was talking to a friend of mine. He runs a different company in the AI space and he’s like, “I’m doing all my prompt engineering now by using an agent to write my prompts.”

I think we’re starting to see this kind of recursive experience now. We can use agents to make your data better so you can actually build better agents.

How do companies need to look at data governance and make changes in order to prevent their AI from hallucinating and ensure that they are getting the best output?

On the preventing hallucination side, customers have to invest a lot more in evals. This is the next way of testing, right? In the past, we had software testing, and now it’s agent and AI testing. How are you going to evaluate that your agent is accurate, that it’s doing the right thing? They need to put a ton more effort in that part, and that is new for a lot of customers. That also ensures that as your data changes, as the foundation model changes, then you don’t suddenly introduce some blip.

That is an area where customers really are building out teams and competency. You didn’t have eval teams two, three years ago in these organizations. That’s not as much data governance, but it is making sure that the outcome is accurate.

From the data governance perspective, you’re moving from this manually curated data governance to this agentic curated data governance—which on one hand has a lot of upside. You are able to automate a lot of this. On the flip side, you don’t always have a human in the loop.

This is why these evals, making sure that you have clarity on what high quality is, and measuring it is going to be always important—because these systems are changing all the time.
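Gutmans’s point about evals can be made concrete with a small sketch. The following is a minimal, hypothetical eval harness—the agent stand-in, the test cases and the substring-match scoring are all invented for illustration, and a real setup would call an actual model and use richer graders—but it shows the basic shape: a fixed set of cases, run against the agent, producing an accuracy score you can track as your data and models change.

```python
# Minimal, hypothetical sketch of an agent eval harness.
# fake_agent and EVAL_CASES are invented for illustration; a real
# harness would call a live agent and use more robust scoring.

EVAL_CASES = [
    # (prompt, substring expected somewhere in the answer)
    ("What is the capital of France?", "paris"),
    ("How many days are in a leap-year February?", "29"),
]

def fake_agent(prompt: str) -> str:
    """Stand-in for a real agent call (e.g., an LLM API request)."""
    canned = {
        "What is the capital of France?": "The capital of France is Paris.",
        "How many days are in a leap-year February?": "There are 29 days.",
    }
    return canned.get(prompt, "I don't know.")

def run_evals(agent, cases):
    """Return the fraction of cases whose answer contains the expected text."""
    passed = 0
    for prompt, expected in cases:
        answer = agent(prompt).lower()
        if expected in answer:
            passed += 1
    return passed / len(cases)

score = run_evals(fake_agent, EVAL_CASES)
print(f"accuracy: {score:.0%}")
```

Re-running the same cases after a data or foundation-model change is what catches the “blips” Gutmans describes.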

How can a CIO make sure they continue to be good stewards of the data they have—so that the next time something new arrives in technology, they don’t need to do a lot of work on data?

Now is the opportunity, right? This was too hard over the years because it was a very manual effort. A very big piece is making sure that you’re using a data platform that makes it easy for you to both deliver what you need today, but then also one that you can bet on that’s going to evolve for what you need in the future.

COMINGS + GOINGS

  • Hiring platform Indeed appointed Jim Giles as chief technology officer, effective February 16. Giles most recently worked as vice president of engineering at Google, where he held multiple senior positions, as well as at IBM.
  • Fast-casual restaurant franchise Freddy’s Frozen Custard & Steakburgers selected Todd Paladini to be its first chief information officer. Paladini most recently worked at Cafe Rio Mexican Grill as its lead for IT, and with Cinemark prior to that.
  • AI cloud service provider IREN hired John Gross as chief innovation officer, a newly established role at the company. Gross is also vice chair of committees within ASHRAE.

STRATEGIES + ADVICE

In a time of constant change, the best leaders need to be innovative, creative and thoughtful—and they should also be making rules. Leading with curiosity is the way to create the future, but in order to stay on target, you need to build with intention.

Meetings are a necessary part of work, and sometimes they need to be uncomfortable to be useful. Instead of being a performative display of how much different teams have done, plan your meetings so they’re true opportunities to discuss issues and figure out solutions.

QUIZ

A judge overseeing a lawsuit, in which Meta is accused of designing Instagram and Facebook to be addictive to children, threatened to hold participants in contempt for which of the following actions?

A. Liveblogging the trial on a social platform

B. Posting on social media to tag witnesses who have asked to remain anonymous

C. Using AI to figure out home addresses of jurors

D. Wearing Meta glasses to record the trial

See if you got the answer right here.