When AI Cannibalizes Its Creators: The Future of Data Science Blogs in the Age of LLMs

A Paradox for the Future of AI

George Pipis
Sep 12, 2024
Photo by Patrick McManaman on Unsplash

For years, I ran a thriving data science blog where I shared code snippets, tutorials, and practical tips for data scientists, engineers, and machine learning enthusiasts. It was a space where we discussed the latest trends, exchanged knowledge, and solved real-world problems using Python, R, and the ever-growing toolkit of data science. But then, something changed.

As Large Language Models (LLMs) like ChatGPT rose to prominence, my blog’s traffic plummeted. What was once a go-to resource for readers had become obsolete. The very audience that used to frequent my posts now turned to these sophisticated AI models for answers. The ease of asking an LLM a question and receiving a concise, tailored response made blog articles, often filled with longer explanations, examples, and background information, feel less relevant.

The Self-Cannibalizing Nature of LLMs

Ironically, the LLMs that have largely displaced my blog’s content are built on the very same data that blogs like mine helped create. My posts, along with countless others, provided the foundation for these models during their training phases. Now, LLMs have gotten so good at parsing…


Written by George Pipis

Sr. Director, Data Scientist @ Persado | Co-founder of the Data Science blog: https://predictivehacks.com/
