How SQLite is tested
sqlite.org | 304 points by whatisabcdefgh 20 hours ago
Over a decade ago, the maintainer of SQLite gave a talk at OSCON about their testing practices. One concept that stood out to me was the power of checklists, the same tool pilots rely on before every flight.
He also mentioned Doctors Without Borders, who weren't seeing the outcomes they expected when it came to saving lives. One surprising reason? The medical teams often didn't speak the same language or even know each other's names.
The solution was simple: a pre-surgery checklist. Before any procedure, team members would state their name and role. This small ritual dramatically improved their success rates, not through better technique, but through better communication.
On the other hand, I think this narrative also causes a lot of useless red tape. There might be some survivorship bias here.
Aviation, Doctors Without Borders, and SQLite have good checklists. Checklists are simple, so it's easy to think "oh I could do that too". But you never hear about the probably endless companies and organizations that employ worthless checklists that do nothing but waste people's time.
I wish there was more talk about what makes a checklist good or bad. I suspect it's kind of like mathematics where the good formulas look very simple but are very hard to discover without prior knowledge.
I make checklists for myself and they're enormously helpful. Because my brain can't always remember every single little detail of every complex task every single time.
I've also seen checklists made by morons that are enormously unhelpful.
IMO it's paramount for whoever is making the checklist to have familiarity with the task at hand (both how to do it properly and which steps people tend to miss or get wrong), investment (would you find this tool indispensable if you were the one executing it?), and a sense of pragmatism and conciseness.
The ability to recognize which things will be obvious or flow naturally from A to B helps eliminate redundant fluff. E.g., I train volunteer firefighters, and in most canonical procedures for calling a Mayday, one step is basically "tell the person on the other end what's wrong". You don't need a checklist item for that. When something goes seriously sideways and you need help, you will be very inclined to convey what the matter is.
> But you never hear about the probably endless companies and organizations that employ worthless checklists that do nothing but waste people's time.
Most if not all of the bad checklists I have encountered are bad for the same reasons: they were not tested or they were poorly written, and most of the time both.
Not tested, in the sense that the checklist was written by somebody who doesn't actually know how to do the whole job. Contrast that with professionals like doctors and pilots: they are well trained, the checklist is well understood to be a reminder, and the rationale behind each item was taught to them. Even then, a professional will question anything they don't understand, and most others in their field could immediately give a detailed answer.
Another example would be HR writing an onboarding checklist. 99% of the time, the checklists I have seen are intended to make HR's life easier, not the candidates' or applicants'.
A checklist is also a clear and distilled form of writing. And as the saying goes, "I didn't have time to write you a short letter, so I wrote a long one instead." Writing short points with clarity takes a long time, and it's not a skill set everyone possesses. Nor do they have the time to do it when it is not part of their job or KPI.
> Most if not all of the bad checklists I have encountered are bad for the same reasons: they were not tested or they were poorly written, and most of the time both.
I'd argue this comes back to "written by people who do not have to follow them on a regular basis".
WHO and Gawande emphasize iteration, that the draft is always wrong. They also claim good checklists are really coordination tools disguised as task lists.
I've always found an enormous amount of good practices (not just engineering ones) in aircraft operations and engineering that would be applicable to software engineering.
I've always daydreamed of an IT organization that combined those with the decision-making procedures and leadership style of modern armies, such as the US Army's.
I've re-read FM 22-100 multiple times, and I find it strikingly modern and inspiring:
https://armyoe.com/wp-content/uploads/2018/03/1990-fm-22-100...
While I do understand that business leadership cannot be held to the high standards demanded where the stakes are far greater, I think there are many lessons to learn there too.
It's all about balance, in the end. If you do too much of one thing your business will fail. If you don't do enough of another, your business will fail too. And they're the same thing...
The trick then is to do just enough of everything to avoid disaster, and to move as fast as you can to get to a realm where you can actually afford to do it right. Most start-ups initially cut corners like it's crunch time at the circle factory, which usually catches up with them at some point, either killing them or forcing them to adopt a different pace. Knowing exactly when to put more or less attention on some factor is the recipe for success, but nobody has managed to execute that recipe twice in a row without finding things that no longer work, so it remains a dynamic affair rather than one you can ritualize.
And that's where checklists shine: repeated processes that are well defined and where change is slow enough that the checklists become 'mostly static', they still change but the bulk of the knowledge condensed in them stays valid over multiple applications.
I guess you are not the only one. Here is a talk by Andrew Godwin about using aviation practices for software engineering:
In some areas, I absolutely agree... I think when it comes to vehicles, medical devices and heavy equipment, it would be better to see much more rigorous practices in terms of software craftsmanship. I think it should be very similar in financial operations (it isn't) and most govt work in general (it isn't).
In the end, for most scenarios, break fast, fix fast is likely a more practical and cost effective approach.
Checklists that I use in personal life:
- Office packing list: a "do-check" checklist that takes 20 seconds to run through right before leaving home
- Checklists for multi-day business and leisure trips
- Home maintenance checklist for filters, drains, and other things that require regular maintenance
The thing that drives me absolutely mental about most developers I’ve worked with is just how much work they’ll do to avoid the easy thing, if the easy thing isn’t programmatic.
I have tests and CI and all that, sure. But I also have a deployment checklist in a markdown document that I walk through. I don’t preserve results or create a paper trail. I just walk the steps one by one. It’s just so little work that I really don’t get why I cannot convince anyone else to try.
Manual checklists are often the best option for repeated tasks that can't be automated sufficiently reliably and sufficiently economically. But if they can be, then manual checklists are unnecessarily inefficient and/or unreliable. And the more frequently the task is repeated (ceteris paribus), the more up-front energy is justified in automating it. That said, to automate a process, you have to understand it well enough to generate a checklist as a prerequisite (and, sure, you can develop that understanding in the course of automation, but doing so first will also go a long way toward telling you whether automation is likely to be worthwhile).
That said, and without prejudice to SQLite's use of checklists, which I haven't deeply considered: while the conditions that make checklists the best choice are clearly present in aviation and surgery, processes around software tend to lend themselves to efficient and reliable automation, so non-transitory reliance on checklists is very often a process smell that, while not necessarily wrong, merits skepticism and inquiry.
Yeah, checklists are great. Further, they're even precursors to automation.
I highly recommend The Checklist Manifesto [1] for an excellent guide on how to construct good checklists.
While I certainly found it insightful, I felt like this book (like so many in the genre) was a pamphlet's worth of material inflated to fill about 250 pages.
Thought this would be a blog, disappointed to see it's a whole book. My suspicion is there is 5 good pages of material stretched out 20-50x.
It's true that you can boil it down a lot. In fact, the book even has a checklist checklist that distills down the advice to one page. However it was overall a very quick read and the extra discussion really did further my understanding of the underlying principles that make a checklist good. I'd recommend reading the whole thing so that you actually make a useful checklist instead of a cargo-cult copy of an aviation checklist.
I've become pretty enthusiastic about checklists.
In case it helps anyone else, I wrote a small tool that helps maintain and execute them:
https://github.com/amboar/checklists/
It's just a shell script wrapped around $EDITOR and git. The intent is to write checklists in the style of GitHub-flavoured markdown. It has some tricks, such as envsubst(1) interpolation and support for embedding little scripts whose output is captured alongside the checklist execution itself.
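To give the flavour, here's a hypothetical sketch of a checklist in that style (my own invention, not taken from the repo; the $VERSION interpolation assumes envsubst(1) semantics as described above):

```markdown
# Release $VERSION

- [ ] `git checkout release/$VERSION` and confirm the worktree is clean
- [ ] Run the test suite and confirm it passes
- [ ] `git tag -s v$VERSION -m "release $VERSION"`
- [ ] `git push origin v$VERSION`
- [ ] Verify the published artifacts match the tag
```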
Here's an example checklist that's fairly well-worn (though somewhat dated):
https://gist.githubusercontent.com/amboar/f85449aad09ba22219...
Where checklist entries are commands, I just copy/paste them into a shell session. Usually this is in a tmux split, with the checklist on one side and a shell on the other.
Could more of it be automated? Perhaps, though when some of the steps fail for various reasons, I find it's easier to recover or repair the situation if I have been invoking them myself sequentially. The embedded script support enables progressive automation where stability of the results is demonstrated over time.
That insight about the ritual improving outcomes through better communication is really interesting. I see it reflected in many meetings I turn up to now that involve an introduction round between participants, which anecdotally improves participation in the meeting.
It would be amazing if someone had a link to a page with the MSF story, as that is a great reference to have! My google-fu hasn’t helped me in this case.
For posterity: the original report on the pilot program with the checklist, including the step of having participants introduce themselves by name (doi: 10.1056/NEJMsa0810119).
Possibly popularised by Atul Gawande “The Checklist Manifesto”.
Meta-comment: LLMs continue to impress me with their ability to unearth information from imprecise inputs/queries.
Likely the idea came from Atul Gawande, a surgeon and writer who has done a lot of work popularising it.
A lot of these seem to be potentially automated - why aren’t they? Anyone know?
The stupid answer is that not everything that can be automated should be.
The real answer is of a more philosophical nature: if you manually had to check A, B, C, ..., Z, then you have a better understanding of the state of the system you work with. If something goes wrong, at least the bits you checked can be ruled out, freeing you to check other factors. And what if your systems correctly report a fault, yet your automatic checklist doesn't catch it?
Also, this manual checklist checks the operator.
You should be automating everything you can, but much care should be put into figuring out if you can actually automate a particular thing.
Automate away the process to deploy a new version of HN; what's the worst that can happen?
But don't automate the pre-flight checklist: if something goes wrong while the plane is in the air, people are going to die.
I think a less verbose version of the above is that a human can detect a fault in a sensor, while a sensor can't detect it is faulty itself.
I'm not a pilot, but my brother is, and I've watched him go through these a bunch of times before takeoff and landing. I think it's about more than automation: these days the aircraft computer "walks" the pilots through the checklists, but it's still their responsibility to verify each item. It's an interesting approach to automation, keeping humans in the loop and actually promoting responsibility and accountability, as in "who checked off on these?"
Most/all of those are individually automated.
Someone checks that they ran successfully, and vouches for it.
Automating the automation can be counter productive.
Like when the release process is triggered automatically by a tag, then fails after an hour-long sequence of complex steps, which forces you to re-tag, but by then your tag is out there.
Or, simply: running the entire process from scratch is a bad idea, but you automated it so that doing so is the easiest path. Now, when you fix something about it, the only way to test the release process itself is to release, and you need half a dozen releases to get it right.
Related. Others?
How SQLite Is Tested - https://news.ycombinator.com/item?id=38963383 - Jan 2024 (1 comment)
How SQLite Is Tested - https://news.ycombinator.com/item?id=29460240 - Dec 2021 (47 comments)
How SQLite Is Tested - https://news.ycombinator.com/item?id=11936435 - June 2016 (57 comments)
How SQLite Is Tested - https://news.ycombinator.com/item?id=9095836 - Feb 2015 (17 comments)
How SQLite is tested - https://news.ycombinator.com/item?id=6815321 - Nov 2013 (37 comments)
How SQLite is tested - https://news.ycombinator.com/item?id=4799878 - Nov 2012 (6 comments)
How SQLite is tested - https://news.ycombinator.com/item?id=4616548 - Oct 2012 (40 comments)
How SQLite Is Tested - https://news.ycombinator.com/item?id=633151 - May 2009 (28 comments)
(Reposts are fine after a year or so; links to past threads are just to satisfy extra-curious readers)
Always makes me a bit envious as well as awestruck. What a joy it must be in a lot of ways to be able to grind and perfect a piece of software like this. Truly a work of craftsmanship.
You can literally just do this. I’ve never gotten fired from a software engineering job for moving slower and building things that work well, work predictably, and are built to last.
Over a career of working at it, you get dramatically better at higher levels of quality even in earlier passes, so the same level of added effort provides increasing levels of reward as you gain experience.
Nobody ever complains about the person who’s leaving everything they touch a little cleaner than how they found it.
> I’ve never gotten fired from a software engineering job for moving slower and building things that work well, work predictably, and are built to last.
In most companies, that’s not how it plays out. Once something works, you’re immediately moved on to the next task. If you’ve had the time and space to refine, polish, and carefully craft your code, you’ve been fortunate.
The person who signals that some task works and is finished is you. You have way more control over this than you are giving yourself credit for.
If you spend your career acquiescing to every request to “just ship it” then, yes, slowing down a second to do a quality pass will seem impossible. But you really can just do it.
> The person who signals that some task works and is finished is you.
That's not how it works in most big companies. You can't take arbitrarily long to finish a project. Before the project is greenlit you have to give an estimate for how long the project will take. If your estimate is too big or seems unreasonable the project dies then and there (or given to someone else). Once the project starts you're held to the estimate, and if you're taking noticeably longer than your estimate you better have a good explanation.
Nobody is saying take three years to complete a week-long task. If you could do a task in one hour, estimate and take two. If you could do it in two days, estimate and take a third day to complete it. Or better yet, estimate three days and take two and a half.
I have never seen a software development shop where estimates were treated as anything other than loose, best guesses. Very infrequently are there actually ever genuinely immutable, hard deadlines. If you are working somewhere where that's repeatedly the case—and those deadlines are regularly unrealistically tight—failure is virtually inevitable no matter what you do. So sure, fine, if you're on a death march my suggestions won't work. But in that kind of environment nothing will.
> Nobody ever complains about the person who’s leaving everything they touch a little cleaner than how they found it.
This should be true, but it's not in my experience. Even small, clear improvements are rejected as off-mission or shunted to a backlog to be forgotten. Like, "cool, but let's hold off on merging this until we can be certain it's safe", or in other words "this is more work for me and I'd really rather not".
Do it as part of ticket work you're already doing. There is always a way to leave things better than how you found them.
I have worked across a wide gamut of roles (full-stack eng, infosec, deploy infra, devops, infra eng, sysadmin), companies (big and small, startups and huge multibillion-dollar players), and industries (finance, datacenters, security products, gaming, logistics, manufacturing, AI) over a thirty year career and I have never felt the level of helplessness that people seem to be claiming. Some places have been easier, some have been harder, but I have never once found it to be as difficult or impossible as everyone laments is the case.
> There is always a way to leave things better than how you found them.
I agree, and that's something I do even if I get push back. I think it's essential in all aspects of life, or things typically get worse. People have to care.
My point is more that, in my experience at least, it is often met with friction, because a lot of people see this kind of housekeeping as bad for the bottom line, or in the weeds, or a distraction, or what have you. I've encountered friction and pushback more than acceptance, let alone appreciation.
A very common (and sometimes fair) form of push back is along the lines of "let's keep this ticket ONLY to the bug fix/feature/whatever and avoid any unnecessary changes". This is generally good practice, but I'll personally allow unrelated changes if they're relatively simple chores to leave things better than they were found.
If I were to summarize my experience, it's that caring to make things better or as good as they should be typically requires my own time and energy outside of regular work hours. That's a hard sell, especially after 20 years or so. I still do my best to make things better and I've come to expect and meet that friction with the necessary energy to overcome it, but... I mean, it's not always easy or necessarily rewarding. Many of my jobs could have often been dramatically easier if I cared less, but I'm not sure anyone would have even minded that.
That's really great and I'm happy for you but your experience is not universal.
My point isn’t that my experience is universal, but that I find it statistically unlikely that this is nearly as hard as people are making it out to be at the majority of SWE roles.
If you find yourself repeatedly working at places where the only option is to crank out garbage at breakneck pace, I don’t know what to tell you. If you stipulate as an axiom that it's impossible to write quality software at ${JOB}, then you're right, by definition there's nothing to be done. I just don't find that a particularly helpful mindset.
You can definitely go overboard for work. If you want to do it as a hobby, go nuts, but there isn't a point in overengineering far beyond what is needed (recall the Juicero).
Overengineering is building a bridge that will stand 1000 years when 100 will do; it's excess rigor for marginal benefit. Juicero wasn't overengineering, it was building a crappy bridge to nowhere with a bunch of gaudy bells and whistles to try and hide its uselessness and poor design, that collapsed with the first people to walk over it
Have you looked at a Juicero teardown [0]? It's overengineered to the point where it's a genuinely astonishing bit of engineering art. It's also an incredibly stupid product. Those things are completely compatible.
Idk, it looks like most of what this person is complaining about is that they don't see a lot of this in high-volume consumer products. But most high-volume consumer products don't have to crank nearly the same amount of torque either.
It's a silly product, but as far as being overengineered goes, it looks like about what I'd expect for those requirements.
Have you ever changed a tyre on a car?
If so, you may have noticed the jack you used didn't have several huge CNC-machined aluminium parts, a seven-stage all-metal geartrain, or a 330 V power supply, and it probably didn't cost you $700. It probably cost more like $40.
And sure, a consumer kitchen product needs to look presentable and you don't want trapping points for curious little fingers. But even given that, you could deliver a product that worked just as well for just as long at a far lower BOM cost.
Something is overengineered for the actual problem even if it's necessary to meet the requirements, if the requirements are themselves unnecessary. Imagine speccing a 100m span to cross a small stream. The resulting bridge can reasonably be called overengineered.
You can achieve the same goal (getting juice from diced fruit without cleanup) much easier with different requirements. The post mentions that.
The pendulum has swung so far in the opposite direction from going overboard that it's almost laughable. Everyone retells the same twenty-year-old horror stories of architecture astronauts, but over a nearly thirty-year career I have seen precisely zero projects fail because engineers over-engineered, over-architected, or over-refactored.
I have, however, seen dozens of projects where productivity grinds to a halt under the ever-increasing cost of even minor changes, due to a culture of repeatedly shipping the first thing that vaguely seems to work.
The entire zeitgeist of software development these days is “move fast and break things”.
Same. But it took learning to ignore everything every manager was telling me: Go faster, ship before I'm ready to ship, proceed without agreed-on and documented requirements, listen to product instead of customers, follow code conventions set by others...
I love sqlite, it's a great piece of software. The website is full of useful information, rather than the slick marketing we are used to, even on open source projects.
With that said, I find it strange how the official website seems to be making its way through the HN front page piecemeal.
This one is probably popping today because of the simonw post yesterday about using an LLM to basically one-shot port a lib across languages with the help of an extremely robust test suite.
If you wait here long enough, it happens again, and again, and again, and again...to the point you start wanting to skewer it. :)
EDIT: Haskell was early 2010s Zig, and Zig is in the traditional ~quarter-long downcycle, after the last buzzkill review post re: all the basic stuff it's missing, ex. a working language server protocol implementation. I predict it'll be back in February. I need to make a list of this sort of link, just for fun.
> The TH3 test harness is a set of proprietary tests [...]
> The dbsqlfuzz engine is a proprietary fuzz tester.
It's interesting that open-source (actually public-domain) software uses some proprietary tests. It never occurred to me that this was a possibility, though in retrospect it's obviously possible as long as the tests are not part of the release.
Could this be an alternative business model for "almost-open-source" projects? Similar to open-core, but in this case the project would be easy to copy (open features) and hard to modify (closed tests).
Many times these test suites are more valuable than the code itself, particularly in legacy software. Finding and documenting the thousands of edge cases that software like Excel must have is more difficult than implementing them.
> It never occurred to me that this was a possibility
Yes, it's viable. I do it for my company's projects, in addition to dual-licensing under the GPL. See "The unit tests and Unicode data generators are not public. Access to them is granted exclusively to commercial licensees." [1].
[1] https://github.com/railgunlabs/unicorn?tab=readme-ov-file#li...
No less impressive than the SQLite project itself, especially the 100% branch coverage! That's really hard to pull off, and especially hard to maintain as development continues.
This looks very cool, and it's all the more thought-provoking that the tests themselves are closed-source, unlike the rest of the codebase. In this evolving world of rapidly improving LLM coding-agent productivity, the idea that the tests are more important than the implementation starts to ring true.
I was thinking about SQLite's test landscape as described here, in relation to simonw's recent writing about porting/recreating the justHTML engine from Python to JS via Codex, nearly "automatically", with just a prompt and light steering.
It makes a lot of sense in general if you think about the business models around open-source products. An extensive test suite gives you the ability to engineer changes effectively and efficiently, meaning that you can also add value on top of released versions better than everyone else.
SQLite's "How SQLite Is Tested" page reads like a bible of testing. I've known few projects that would score even moderately well by comparison.
I was pleasantly surprised recently when planning to "upgrade" a light web app to be portable between SQLite and DuckDB, and the LLM I was working with really made the case that SQLite is better when concurrent operations are involved.
I am surprised to see that there isn't a lot of information about performance regression testing.
Correctness testing is important, but given the way SQLite is used, potential performance drops in specific code paths or specific types of queries could be really bad for apps that use it in critical paths.
While I've worked in HFT and understand the sentiment, I can't recall any open-source project I've used coming with performance guarantees. Most use license language disclaiming any guarantee or warranty. Are there notable projects that do include this consideration in their core mission?
I believe every sensible open-source developer strives to keep their software performant. To me, a performance regression is a bug like any other, and I go and fix it. Sure, no warranty is guaranteed in the license, yet no one who takes their project even a little seriously treats that as "I can break this any way I want".
Given the stability track record, I was more curious about how SQLite does its anomaly testing. Sadly, the article has just a few words about it.
Truly one of the best software products! It is used on every single device, and it is just pure rock-solid.
Considering that the support tier where you get access to the testing suite is 150K/year, I don't think they will be spilling any beans soon.
I love SQLite's quality and the docs explaining this kind of thing. However, not all parts of SQLite have the same level of quality. I was very disappointed when I found bugs related to its JSON functions (and several other similar bugs related to other features):
SQLite supports a set of JSON functions that let you query and index JSON columns directly, which looks very convenient—but be careful:
1. `json('{"a/b": 1}') != json('{"a\/b": 1}')`
Although the two objects are identical in terms of JSON semantics, SQLite treats them as different.
2. `json_extract('{"a\/b": 1}', '$.a/b') is null`, `json_extract('{"\u0031":1}', '$.1') is null`, `json_extract('{"\u6211":1}', '$.我') is null`
This issue only exists in older versions of SQLite; the latest versions have fixed it.
In many cases you can't control how your JSON library escapes characters. For example, `/` doesn't need to be escaped, but some libraries will escape it as `\/`. So this is a rather nasty pitfall: you can end up failing to match keys during extraction for seemingly no reason.
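Put concretely, at an sqlite3 prompt the pitfalls above look roughly like this (restating the examples from this comment; exact behaviour depends on your SQLite version):

```sql
-- Both literals denote the same JSON object, but the differently
-- escaped key makes SQLite treat the values as unequal (returns 0):
SELECT json('{"a/b": 1}') = json('{"a\/b": 1}');

-- On older versions, extraction against the escaped key returns NULL
-- (reported fixed in recent releases):
SELECT json_extract('{"a\/b": 1}', '$.a/b');
SELECT json_extract('{"\u0031": 1}', '$.1');
```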
What is the story with Fossil? Is it used outside of SQLite?
The story of Fossil:
Something better than CVS was needed. (I'm not being critical of CVS. I had to use the VCSes that came before, and CVS was amazing compared to them.) Monotone gave me the idea of doing a distributed VCS and storing content in SQLite, but Monotone didn't support sync over HTTP, which I definitely wanted. Git had just appeared, and was really bad back in those early years. (It still isn't great, IMO, though people who have never used anything other than Git are quick to dispute that claim.) Mercurial was... Mercurial. So I decided to write my own DVCS.
This turned out to be a good thing, though not in the way I expected. Since Fossil is built on top of SQLite, Fossil became a test platform for SQLite. Furthermore, when I work on Fossil, I see SQLite from the point of view of an application developer using SQLite, rather than in my usual role as a developer of SQLite. That change in perspective has helped me make SQLite better. Being the primary developer of the DVCS for SQLite, in addition to SQLite itself, also gives me the freedom to adapt the DVCS to the specific needs of the SQLite project, which I have done on many occasions. People make fun of me for writing my own DVCS for SQLite, but on balance it was a good move.
Note that Fossil is like Git in that it stores check-ins in a directed acyclic graph (DAG), though the details of each node are different. The key difference is that Fossil stores the DAG in a relational database (SQLite) whereas Git uses a custom "packfile" key/value store. Since the content is in a relational database, it is really easy to add features like tickets, a wiki, a forum, and chat: you've got an RDBMS sitting there, so why not use it? Even without those bonus features, you also have the benefit of being able to query the DAG using SQL to get useful information that is difficult to obtain from Git. "Detached heads" are not possible in Fossil, for example. Tags are not limited by filesystem filename restrictions. You can tag multiple check-ins with the same tag (ex: all releases are tagged "release"). If you reference an older check-in in the check-in comment of a newer check-in, then go back and look at the older check-in (perhaps you bisected there), it will give a forward reference to the newer one. And so forth.
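As a sketch of what "query the DAG with SQL" buys you: assuming a parent/child link table along the lines of plink(pid, cid) (treat the table and column names as my assumption about Fossil's schema), walking a check-in's ancestry is a plain recursive CTE rather than bespoke graph-traversal code:

```sql
-- Assumed schema: plink(pid, cid) links a parent check-in (pid)
-- to a child check-in (cid). :rid is the check-in to start from.
WITH RECURSIVE ancestors(rid) AS (
  SELECT :rid
  UNION
  SELECT p.pid
    FROM plink AS p
    JOIN ancestors AS a ON p.cid = a.rid
)
SELECT rid FROM ancestors;
```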
This is wild, thank you for answering!
About Fossil, I really liked how everything is integrated into the VCS.
My friends also make fun of me for having tools that only I use. Somehow, understanding a tool down to the last little detail is satisfying in itself. We live in an era of software bloat that does not make much sense.
Anyway, thanks for SQLite. I use it for teaching SQL to students and for my custom small-scale monitoring system.
I love Fossil, I love SQLite, and I also like Althttpd.
https://sqlite.org/althttpd/doc/trunk/althttpd.md
Just like Fossil vs Git and SQLite vs $SomeRealSQLServer, I wish Althttpd would someday become a no-bullshit, self-contained replacement for Nginx/Apache/whatever bloated HTTP servers. It has already proven itself by serving the Fossil and SQLite sites, but its configuration and features for serving an actual website are not yet "real production quality", at least that is how it feels to me.
Overall, what an amazing legacy this set of software has been to the world.
SQLite & Fossil* were created by same person (once a member of Tcl Core Team). Fossil few years after SQLite (was on CVS before). A rationale is given in: https://sqlite.org/whynotgit.html. The one other big project using it is Tcl/Tk. (Can say Tcl x SQLite x Fossil form a trinity of sorts with one using the others.)
*The homepage is available in: https://fossil-scm.org/home/doc/trunk/www/index.wiki.
> Is it used outside of SQLite?
Not really. It's one of the early _distributed_ version control systems, released a little after git but before git gained widespread acceptance.
It has a built-in (optional) web UI, which is cool, and uses SQLite to store its state/history.
I can't answer that, but it's a great thing that an entire Fossil repo lives in a single SQLite file.
They need to do better testing to stop the whole database file getting corrupted, which happened a ton to me with SQLite.
I've never had SQLite corrupt a database file, and given how widely it's used literally everywhere without reports of corruption, and the incredibly extensive testing methodology they use to ensure that, your issues seem very unlikely to have been SQLite's fault.
To be fair, there are numerous ways to misuse it. Depending on how and where you are using SQLite, you have to know things about WAL and syncing etc.
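For instance, a minimal sketch of the settings people commonly reach for in that situation (a starting point for reading the docs, not a recipe that excuses misuse):

```sql
PRAGMA journal_mode = WAL;    -- write-ahead logging: readers don't block the writer
PRAGMA synchronous = NORMAL;  -- common durability/performance trade-off in WAL mode
PRAGMA busy_timeout = 5000;   -- wait up to 5 s on a locked database instead of erroring
```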
Interesting, TH3 is proprietary.
Perhaps someone in the know can answer this: How reliable is SQLite at retaining data integrity and avoiding data corruption, compared to say, flat text files?
If you use SQLite within conventional means, the answer is: very. SQLite also provides repair tools for when such a mishap occurs.
... very thoroughly is the answer
What a superb piece of software SQLite is.
Install and forget.