For me, the journey to externalized data storage began in HyperCard, the moment a client wanted to make a second version of his product.
In the first version we did what HyperCard uniquely encourages developers to do: just leave the data in the UI. Seemed simple enough.
But V2 incorporated a great many changes to the UI - where the data was stored. So replacing the UI meant replacing the data.
So we did what HC folk did in those days, at least the subset of HC devs with products successful enough to merit a major upgrade:
We encumbered the user with two extra steps: before upgrading they had to export their data, and after upgrading they had to re-import it to pick up where they'd left off.
As my interest in development grew, I found I wasn't alone. Experts had been writing about software development for many years. I read them, as many as I could make time for.
I found a common theme among many professional developers: a desire to maintain a separation of concerns. Code, UI, and data can be managed separately, and when done well it means changes to any one of them have minimal impact on the other two.
Consider the humble word processor: you can change the app, which may change the entire presentation of the data in amazing new ways, and the data just goes along for the ride smoothly.
Whether the separation of the application from user data is explicitly part of the workflow (as with a word processor) or automated (as with an iPhoto index), that separation is always present, allowing one to be changed without affecting the other.
Similarly, separating core business logic from the rest of an app keeps it independent of the UI.
I once wrote an app with an embedded search engine. Though initially delivered on CD-ROM, eventually it became a web product. And because the core logic was written with an awareness of separation of concerns, we just picked up the search engine part, moved it to the server, and added an HTML wrapper for the output. The rest of it just went along for the ride.
Code, UI, and data: wherever practical, separation yields flexibility for maintenance and enhancement.
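To make that concrete, here's a minimal sketch of the idea, in Python rather than the xTalk of the original project, with invented function and record names: the search logic knows nothing about presentation, so swapping the desktop output for an HTML wrapper doesn't touch it.

# Core logic: knows nothing about how results will be displayed.
def search_records(records, term):
    """Return every record whose text contains the search term."""
    term = term.lower()
    return [r for r in records if term in r["text"].lower()]

def present_as_text(results):
    """One thin UI wrapper: plain text for a desktop app."""
    return "\n".join(r["title"] for r in results)

def present_as_html(results):
    """Another thin UI wrapper: HTML for the web version."""
    items = "".join("<li>%s</li>" % r["title"] for r in results)
    return "<ul>%s</ul>" % items

records = [
    {"title": "Widget manual", "text": "How to assemble the widget"},
    {"title": "Gadget manual", "text": "How to repair the gadget"},
]
hits = search_records(records, "widget")
print(present_as_text(hits))
print(present_as_html(hits))

Same engine, two presentations; moving it to a server is just one more wrapper.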
--
Later in my journey I encountered data sets large enough to impose memory constraints. This was in the Mac Classic days, before macOS gave us NeXT's UNIX underpinnings, with all the superior memory management UNIX offers. "Out of Memory" issues were a thing in those days.
We had to separate the data because even though the addressing within the engine allowed up to 4 GB in a field or other container, with everything else going on in the machine we simply didn't have 4 GB available.
So I began reading what I've come to learn is what most CS literature is historically about: the tradeoffs between disk and RAM, and the smoothest ways to move data between the two.
External data can use as much space as storage allows. RAM is for the subset we care about in the moment.
You can have any number of word processor documents on disk, but right now the one that matters is the novel you're writing at the cafe.
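As a rough sketch of what that looks like in practice (Python here, with a made-up file name, column layout, and zip code): stream the file and keep only the rows you need, so RAM holds the subset rather than the whole thing.

# Read one row at a time instead of loading a possibly huge file into memory.
matches = []
with open("addresses.tsv", "r", encoding="utf-8") as f:
    for line in f:                                # one row in RAM at a time
        fields = line.rstrip("\n").split("\t")
        if fields and fields[-1] == "90210":      # keep only the subset we need
            matches.append(fields)

print(len(matches), "matching rows held in memory")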
--
Then we needed to find stuff, to query collections to get the subset we're interested in. And that brought me to indexes.
Indexing is a vast topic in itself, but for here consider the speed difference between iterating through an entire collection of addresses to find those with a given zip code, and having an index already sorted by zip code so you can get all those records in a single lookup.
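Here's a toy Python sketch of the difference (real database indexes use B-trees and friends, but the shape of the win is the same):

addresses = [
    {"name": "Alice", "zip": "02134"},
    {"name": "Bob",   "zip": "90210"},
    {"name": "Carol", "zip": "02134"},
]

# Full scan: touch every record, every time you query.
scan_hits = [a for a in addresses if a["zip"] == "02134"]

# Index: build it once, then each lookup is a single step.
by_zip = {}
for a in addresses:
    by_zip.setdefault(a["zip"], []).append(a)

index_hits = by_zip.get("02134", [])

print(scan_hits == index_hits)   # same answer, very different cost at scale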
Indexing is a fascinating world of reading, and I can't recommend it strongly enough for geeks who enjoy discovering inventive solutions.
And it's not just performance, though it's hard to beat compiled object code purpose-built for a task. It's also flexibility, for finding, extracting, and presenting the found set.
A good query language lets you specify what criteria you want records to meet, along with which fields you want to display once they're found.
Sure, we can write these things in xTalk, and I have (I wrote a nifty lib years ago for working with tab-delimited files; fun to do, and useful for what I was doing at the time).
But with a database engine you don't need to write that. They've already done it. And they've done it in highly optimized compiled object code, so it's much faster than anything any scripting language can do. And by using a popular DB engine you get the work of many hundreds of specialists, so the code is generally more robust than anything a single individual would be able to design, test, and debug working alone.
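For a concrete taste, here's a small sketch using SQLite (which ships with Python) as a stand-in for any popular engine; the table and column names are invented for the example, and the query states both the criteria records must meet and the fields to return:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE addresses (name TEXT, city TEXT, zip TEXT)")
con.executemany(
    "INSERT INTO addresses VALUES (?, ?, ?)",
    [("Alice", "Boston", "02134"), ("Bob", "Beverly Hills", "90210")],
)

# Criteria (WHERE) plus the fields to display (SELECT), in one declaration;
# the engine's compiled code does all the scanning and index work.
for name, city in con.execute(
    "SELECT name, city FROM addresses WHERE zip = ?", ("02134",)
):
    print(name, city)

con.close()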
--
And then the Internet happened.
Before that, applications were generally designed for a single user to run on a single computer.
The arrival of laptops introduced something we'd never had to think about before: keeping data in synch between multiple computers.
And with networking the opportunity was even bigger: collaboration between multiple users.
Those who'd already cultivated habits centered around maintaining a separation of concerns found that it makes relatively little difference to the application whether it pulls data from a local storage device or a remote server. Once it's loaded it's all the same.
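A sketch of why the app barely notices, in Python with placeholder paths and URLs: give the rest of the code one "load the text" function, and whether the bytes come from a local file or a server is an implementation detail.

from urllib.request import urlopen

def load_text(source):
    """Return the contents of a local path or an http(s) URL as text."""
    if source.startswith("http://") or source.startswith("https://"):
        with urlopen(source) as response:
            return response.read().decode("utf-8")
    with open(source, "r", encoding="utf-8") as f:
        return f.read()

# The rest of the app treats both exactly the same once loaded.
# doc = load_text("notes.txt")
# doc = load_text("https://example.com/notes.txt")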
But oh what a difference it makes for workflows.
I started this reply on my phone, and when it got verbose (pardon the length; you know how I can get when telling stories) I finished it on my workstation. I don't even know exactly where this text lived while I was working on it, where the server is physically located. I don't need to. All I know is I can work on it anywhere, from any computing device I happen to have in my hands at the moment.
Separation of concerns has many, many benefits.
--
All that said...
...deep at the heart of every Linux system is a database so critical that if munged there's a good chance your machine won't boot. And though we may hold in our hands computers with other OSes, we use them to work on remote machines which are usually Linux (iCloud is a Linux farm, for just one example, and these forums are run on Linux as another), so this bit of trivia affects all of us even if we don't identify as "Linux users":
fstab is the File System Table, listing storage devices and mount points.
And it's a simple space-delimited text file.
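Which means that on a Linux box you can read it with nothing fancier than split(); a quick Python sketch, assuming only the standard whitespace-separated field layout:

with open("/etc/fstab", "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        fields = line.split()                  # whitespace-delimited fields
        device, mount_point = fields[0], fields[1]
        print(device, "->", mount_point)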
Sometimes even the most critical data can benefit from simplicity.
So for all the talk about size, performance, and complexity, there remains a solid and rather wide range of use cases favoring simple data storage.
Flat files have a place. Collections of text files drive many powerful CMSes, and delimited text is a wonderful option for data that lends itself to a row-and-column format and is small enough to work with in memory.
Even better: you can open text files with a wide range of programs. If you've been in the biz long enough to see a favorite app with a proprietary format reach end of life, you know how important data longevity becomes. And even before then, flexibility in editing your data is rarely a bad thing.
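For instance, a few lines of Python's standard csv module (file and column names made up here) will read a tab-delimited file into labeled rows, no engine required:

import csv

with open("contacts.tsv", "r", encoding="utf-8", newline="") as f:
    reader = csv.DictReader(f, delimiter="\t")   # header row names the columns
    for row in reader:
        print(row["name"], row["zip"])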
Data need not be fancy even to enjoy the benefits of the cloud: file-synching services like Dropbox, Google Drive, iCloud, Nextcloud, and others let you work anywhere without having to craft a custom synch solution yourself.
And for simple things for personal use, there's little penalty for storing data in the same stack file where it's displayed. The convenience is hard to beat. I do it all the time.
But for professional work delivered to others, I just do what the rest of the world does: maintain a separation of concerns that supports maintenance and enhancement.
I've been bitten by choosing otherwise, and have enjoyed many benefits from following the guidance of professionals on this.