Show HN: A Better Log Service
txtlog.net — 147 points by williebeek 2 days ago
Hello everyone! There are many log services available, and this is my attempt at a better one.
Most online logging tools feature convoluted UIs, arbitrary mandatory fields, questionable AI/insights, complex pricing, etc. I hope my application fixes most of these issues. It also has some nice features, such as automatic Geo IP checks and public dashboards.
Although I've created lots of software, this is my first open source application (MIT license); the tutorial for self-hosting is hopefully sufficient! Most of my development career has been with C#, NodeJS and PHP. For this project I've used PHP (8.3), which is an absolute joy to work with. The architecture is very scalable, but I've only tested up to a few billion logs. The current version has been used in production for a few months now. Hope you enjoy/fork it as you see fit!
It's a minor thing, but I would remove the jQuery dependency. You're not doing much with it that plain JavaScript couldn't do just as well, if not better. Plain JS has come a long way since jQuery first came out.

> there are many log services available and this is my attempt at a better one.

Out of curiosity, can you describe how your service is better than others?

> I hope my application fixes most of these issues

Do you care to elaborate on the "how"?

I'm curious about the open source nature of this and how you / people in general manage a project where you are hosting it and need to maintain its security, but are also presumably merging pull requests as people contribute to the project. I would be quite paranoid about this, i.e. concerned that someone might slip in a line of code with the intent of breaching the service that I would not catch during code review. I know this is true of any open source project, but it feels especially fraught when you are also hosting it and letting people sign up and pay for it. I'm wondering if you or others have experience with this and what approaches and practices mitigate this risk.

Just because a project is "open source" doesn't actually mean you must accept or even merge PRs from others. After reading others pointing this out, my opinion on managing open source projects has significantly changed. Of course, you can entertain PRs and see if the idea behind them is sound, but not accept the raw code from others and instead implement the features the way you envision. Keep in mind it's always possible to have a vulnerability without anyone else's assistance. This is especially true if you use dependencies, as you don't keep track of every line of code they add.

> This is especially true if you use dependencies, as you don't keep track of every line of code they add.

You absolutely should vendor your dependencies and review them before accepting the new version.
Even though they are dependencies, you are ultimately responsible for using them. "They are just dependencies" doesn't absolve you of responsibility.

Great points about dependencies and reviewing PRs. In addition to manual reviews, layering security tools within your CI/CD pipeline is key. Tools like static code analyzers, dependency scanners, and security linters help catch vulnerabilities early. Open source can also be a valuable way to uncover security gaps, but having a secure channel for reporting vulnerabilities is crucial to addressing them quickly. Leveraging techniques like Content Security Policies (CSPs) adds extra layers of protection, promoting proactive security throughout development and deployment.

For users of OS projects, a very common approach is to clone into a private repo, then only pull upstream changes within your own timeline/process, and potentially open public PRs at some point after working in private, i.e. you do your business in private and share in the public part as and when it works. For the project maintainer, people can open PRs whenever they want, but you are under no obligation to accept them or use any of the code; they're doing this to help others but don't need to for their own scenario.

It looks like that's a PHP codebase. I'm curious why one should use this solution instead of more performant Go/Rust log backends? Also, one of the login links takes you to a 404 page: https://triplechecker.com/s/jDTmQa/txtlog.net

They said

> Most of my development career has been with C#, NodeJS and PHP

and then

> The architecture is very scalable, but I've only tested up to a few billion logs.

Some people praised Go as a better language for the use case than PHP. I'd say Elixir is even better. It can handle massive concurrency easily, can be made distributed easily, has a built-in, in-memory, key-value store (ETS), and is probably the best high-level language for anything that's facing the network.
I've really been interested in learning more about Elixir and how it accomplishes these things, because I constantly hear the same opinions from others. Do you have some good resources you'd recommend for getting started with Elixir for a principal engineer who wants to understand these at-scale issues and how Elixir solves them better than other languages?

Yes, two books. To get a feel for the language: "Elixir in Action" by Sasa Juric. To discover how Elixir and the platform it's built on excel in scalability and fault-tolerance: "Designing for Scalability with Erlang/OTP" by Francesco Cesarini.

What in the world does this mean? https://txtlog.net/doc#:~:text=use%20your%20local%20time%20w... That's made twice as bad by the "we throw away Z because you were just kidding by including it". That leads me to believe that any RFC 3339 timestamp that isn't automatically Z (e.g. 1996-12-19T16:39:57-08:00 <https://datatracker.ietf.org/doc/html/rfc3339#section-5.8>) is ... well, I don't know what it's going to do, but it likely won't be good.

It also appears that your documentation is currently a very verbose version of an OpenAPI spec, so you may save your readers some trouble by actually publishing one, with the added advantage that OpenAPI renders come with a "Try it" button. That would allow you to save the natural language parts for describing things that are not API-centric (such as the "but WWWWHHHHYYY mysql AND clickhouse" that you alluded to elsewhere but wasn't mentioned at all in /doc nor /selfhost).

The date treatment isn't great, but the repo seems to indicate it's existed as a public thing for 22 days. So perhaps it's just an early compromise to get it working.

For all the folks championing how awesome PHP is in this thread, one would surely hope it has RFC 3339-aware date parsing, no?
But I guess that <https://www.php.net/manual-lookup.php?pattern=rfc%203339&sco...> and <https://www.php.net/manual-lookup.php?pattern=iso8601&scope=...> both being :shruggle: doesn't do it any favors. However, it seems it is just a search stupidity, because https://www.php.net/manual/en/datetimeimmutable.createfromfo... I do love this, since it 100% squares with my mental model of PHP's approach to life: you're holding it wrong https://www.php.net/manual/en/function.date-parse-from-forma...

Given the tone and wording of your comments I hesitated to even reply but, alas, my love for PHP was strong enough to push me through. You are, actually, doing it wrong. https://carbon.nesbot.com/docs/ I forgive you, being that you're clearly not familiar with modern PHP and its incredibly mature and diverse library ecosystem and first-class package manager.

> However, it seems it is just a search stupidity ...

You're searching a list of thirty (30) functions. I don't even know how you found that list of functions but, surely, you don't think that's an exhaustive place to search for a specific date format? Surely you're not being purposely obtuse. (As you likely found, if you just plop your search term in the search at the top of the PHP website, you would have found the DateTime class and how to handle these various formats.) Anyway, for anyone who may happen across this odd chain of comments, dealing with dates in PHP is an actual breeze using Carbon\Carbon.

Off-topic, but thanks for the neat trick with the #:~:text= links.

It's actually a standard!
https://developer.mozilla.org/en-US/docs/Web/Text_fragments It can do a bunch of awesome stuff, but the text= one is the one I use the most. I finally started using it when it landed in a Firefox release (although, in true Firefox fashion, they give no fucks about the UX, forcing me to install a "create link to selection" extension).

I too must thank you for this, I had no idea this existed and likely will be making regular use of it now :)

Pretty unrelated, but I like how it displays large amounts of potentially diverse JSON events. Would need some better filtering and sorting, hiding of keys, etc.
Products which do this well are Elastic and Splunk, but those are too heavy for my taste. I always played with the idea that logs could be viewed as packets of some protocol, so you could use Wireshark to filter them and view related logs in the "stream"-like view that Wireshark provides.

The 'easy to use' / 'view' was very nice. If you could add the actual session logs in, it would be amazing.

This is nice. At work, we use Datadog for logging, and I have previously used CloudWatch, Splunk, and Honeycomb. Among these, only Honeycomb makes implementing canonical log lines [1] easier. I want arbitrarily wide, structured logs [2] without paying exorbitant costs for cardinality. Our Datadog costs are outrageous, and it seems like no one cares at this point. Pydantic Logfire is also doing some good work in Python-specific environments. I use both Python and Go, but Logfire wasn't as ergonomic in Go.

[1]: https://stripe.com/blog/canonical-log-lines
[2]: https://www.honeycomb.io/blog/structured-events-basis-observ...

My current log solution is based on ClickHouse; the one I'm tinkering with in my free time is VictoriaLogs. https://docs.victoriametrics.com/victorialogs/

Very nice. A lot of the complexity you described is why I've settled on using CloudWatch Logs for anything I have on AWS. I don't need a fancy UI, just a powerful querying language for investigation and debugging. With that said, it would be nice to see at least some mechanism for building aggregate queries (for example, 4* results in the last 24 hours by user), but if it's ClickHouse underneath, I assume that's easy using standard ClickHouse tools.

I hate how CloudWatch itself is so fragmented, and how they have three different query languages for logs. It's all cognitive overhead I don't want to learn.

I will say that the language isn't the most intuitive, and a project like this one, with some simple querying and the (presumed) ability to drop down to SQL for power use, is probably the ideal solution.
(Doable with CloudWatch Logs and Athena, but that's another can of complex worms.)

I would be happy to pay a premium for a better CloudWatch. For me it is always unintuitive, which I am sure is driven by my limited use.

We need glorified (rip)grep instead of ELK and friends, which have a huge learning curve. I welcome this effort.

I'm pretty satisfied with Loki. It just ingests the logs and offers a powerful query language to extract data at query time (e.g. parse JSON, run regexes, plot). It can store data in a local folder or S3-compatible storage. I also gave up on configuring ELK in the past...

Don't ask me why, but I've developed an instinct that recognizes solutions that use ClickHouse under the hood :)

Seems to be using MySQL instead? https://github.com/WillieBeek/txtlog/blob/master/txtlog/data...

Tell us more!

It uses both: MySQL for the metadata and ClickHouse for the logs. The selfhost page explains a bit more about the architecture. Edit: the connection to ClickHouse uses the MySQL driver; this is actually a very nice CH feature, as you can connect to CH using the regular mysql or postgresql client tools. The PHP MySQL PDO driver works seamlessly. One catch: using advanced features like CH query timeouts requires a CTE function; check the model/txtlogrowdb.php file if you're interested.

I've heard good things about Axiom[0], especially for high-scale needs.

If you like them, please submit the link on its own, and don't take away from someone's MIT "Show HN" to plug a non-open-source project.

This looks very interesting! My suggestion for the self-hosting is to create Docker images and use docker-compose. The self-hosting currently takes a bit of effort to set up. I also wonder if PHP is a good language for this. For the UI, yeah, that's fine and makes sense. But the log processor is going to need to handle a high throughput, which PHP just isn't good at.
For the same resources, you can have Go doing thousands of requests per second vs PHP doing hundreds of requests per second.

> PHP doing hundreds of requests per second.

You may want to update your understanding of PHP's and Go's speed. Both of your estimates are off by a couple orders of magnitude on commodity hardware. There are also numerous ways to make PHP extremely fast today (e.g. Swoole, ngx_php, or FrankenPHP) instead of the 1999 best practice of Apache with mod_php. Go is absolutely an excellent choice, but your opinion of PHP is quite dated. Here are benchmarks for numerous Go (green) and PHP (blue) web frameworks: https://www.techempower.com/benchmarks/#hw=ph&test=fortune&s...

Sure, PHP can process logs of any volume, but it would require 5–10 times more servers to handle the same workload as something like Go. Not to mention that Go just works out of the box, while for PHP you must set up all those additional daemons you listed and make sure they work: more machinery to maintain, and usually with quite a lot of footguns, too. Like, recently our website went down with just 60 RPS because of a bad interaction between PHP-FPM (and its max worker count settings) and Symfony's session file locks. For Go on a similar machine, 60 RPS is nothing, but PHP can already barely process it, unless you're a guru of process manager settings. In a different PHP project, we have a bunch of background jobs which process large amounts of data, and they routinely go OOM because PHP stores data in a very inefficient way compared to Go. In Go, it's trivial to load hundreds of thousands of objects into memory to quickly process them, but PHP already starts falling apart before we hit 100k. So we have to use smaller batches (= make more API calls), and the processing itself is much slower as well. And you can't easily parallelize without lots of complex tricks or additional daemons (which you need to set up and maintain).
It's just more effort, more wasted time, and more RAM/CPU for no particular gain.

> In Go, it's trivial to load hundreds of thousands of objects into memory to quickly process them, but PHP already starts falling apart before we hit 100k.

I'm not going to argue that PHP is _better_ than Go. Just starting off with that. But if your background jobs are going OOM when processing large amounts of data, it's likely that there are better ways to do what you're trying to do. It is true that it's easy to be lazy with memory/resources in PHP due to the assumption that it'll be used in a throwaway fashion (serve request -> die -> serve request -> die), but it's also perfectly capable of running long-lived/daemonized processes without memory issues, rather trivially.

This isn't a PHP problem, this is a configuration problem. You shouldn't be using the filesystem to handle your sessions in a production application.

Anything that unexpectedly blocks a process can bring down your entire PHP server, because you will run out of worker processes. For example, imagine you experience a spike in requests while another server you're trying to call is timing out. You can't set the maximum worker count to a very high value because the operating system has an upper limit. Since the limit must remain low enough, you can quickly run out of worker processes. In contrast, Go can efficiently manage thousands of such blocked goroutines without issue. Sure, you can address this problem in PHP, but you need to:

- understand PHP-FPM (or whatever you use) configs and their footguns
- understand NGINX configs and their footguns
- fiddle with PHP configs / optimize your code to fit within PHP's maximum limits
- rent larger servers to get the same throughput

True. I stand corrected. This is a footgun, regardless of whether the block comes from file systems or remote requests or whatever.
My claim that it's a configuration problem is just a "fix", and there is ultimately an unlimited list of ways this same thing can come up to bite you. Well, outside of aggressive timeouts, and even then, with enough request volume that's not going to save you :D

What you're talking about is generally not considered production-ready. While you can use these tools, you will almost certainly run into problems. I know this because, as an active PHP developer for over a decade, I'm very much paying attention to that field of PHP. What we see here is a classic case of benchmarks saying one thing while the reality of production code says something else. Also, I used Go as a generic example of compiled languages. But what we see is production-grade Go frameworks outperforming non-production-ready, experimental PHP tooling. And if we go look at all of them https://www.techempower.com/benchmarks/#hw=ph&test=fortune&s... we'll see that even the experimental PHP solution is 43rd, beaten out by compiled languages.

> ... you can have Go doing thousands of requests per second vs PHP doing hundreds of requests per second.

> I know this because as an active PHP developer for over a decade I'm very much paying attention to that field of PHP.

<insert swaggyp meme here> As an active PHP developer as well, it sounds like you have no idea what you're talking about.

> While you can use these tools you will almost certainly run into problems.

Which tools are "generally not considered production-ready"? From what I'm seeing on the linked list of benchmarks...

- vanilla php
- workerman
- ubiquity
- webman
- swoole

I'd venture to bet all of these have been battle-tested and production-ready for years now. As someone who has built a handful of services that ingest data in high volume through long-running PHP processes... it's stupidly easy and bulletproof. Might not be as fast as Go, but to say these libraries or this tech isn't production-ready is rather naive.

Nobody is suggesting PHP beats compiled. We're arguing with you about your utter lack of expertise in the language, knowledge of the ecosystem and "production-ready" status of the many options, and your overall coding ability when it comes to PHP.

> Nobody is suggesting PHP beats compiled.

Actually, there seem to be people arguing that.

> We're arguing with you about your utter lack of expertise in the language, knowledge of the ecosystem and "production-ready" status of the many options, and your overall coding ability when it comes to PHP.

If you're doing that with benchmarks, you're doing a shitty job. My numbers came from experience in production environments with production workloads. Not to mention that you're citing experimental tooling as examples. I've literally seen multiple companies try to use FrankenPHP. Not one even made it to QA, because it broke during dev testing.

Again, you don't have the slightest clue what you're talking about. There are numerous production-ready choices that I and others have mentioned. PHP trivially scales up to multiple nodes behind an LB. You're really only limited by your backend storage connection count and throughput. Go and friends may make for more efficient resource utilization, but it will be marginal in the grand scheme of things unless there are plans to do massively different things.

As it is, this code is very simple. I haven't used PHP in 15 years and I was able to trace through this from front-end to back-end in less than 3 minutes. To me it looks like a really great level of complexity for the problem it solves. Keep it up, OP.
You can, but that costs more money...

> Keep it up, OP.

Live in the real world. No one wants to have a fleet of servers for their logging infra when there are options to run it on a single server.

Thanks for the tip, I will check if inserting rows with Go is any faster. For reference, inserting a log takes three steps: first the log data is stored in a Redis Stream (memory), then a number of logs are taken from the stream and saved to disk, and finally they are inserted in batches into ClickHouse. I've built it so you can take the ClickHouse server offline without losing any data (it will be inserted later). For reference, moving about 4k logs from memory to disk takes less than 0.1 second. This is a real log from one of the webservers:

Start new cron loop: 2024-12-18 08:11:16.397...stored 3818 rows in /var/www/txtlog/txtlog/tmp/txtlog.rows.2024-12-18_081116397_ES2gnY3fVc (0.0652 seconds).

Storing this data in ClickHouse takes a bit more than 0.1 second:

Start new cron loop: 2024-12-18 08:11:17.124...parsing file /var/www/txtlog/txtlog/tmp/txtlog.rows.2024-12-18_081116397_ES2gnY3fVc
* Inserting 3818 row(s) on database server 1...0.137 seconds (approx. 3021.15 KB).
* Removed /var/www/txtlog/txtlog/tmp/txtlog.rows.2024-12-18_081116397_ES2gnY3fVc

As for Docker, I'm too much of a Docker noob, but I appreciate the suggestion.

On the other hand, some people (me) are happy to have an actual self-hosted setup and not be forced into a Docker setup with unknown overhead.

Why not both? It's not much trouble to publish a Dockerfile while still documenting a normal installation.

It uses ClickHouse, though, which should be extremely fast for this.

Yes. But PHP still needs to process it before it goes to ClickHouse. PHP is the bottleneck.

If that "bottleneck" is thousands of requests per second, then it doesn't really matter for smaller deployments, does it?
(Which seems to be the target audience, and not FAANG.)

I'm not a big fan of folks calling out languages as bottlenecks when they have no proof of the actual overhead or of how much faster it would be in another language.

To tweak a PHP deployment to handle hundreds of requests per second, which is very, very realistic for basic logging for a mid-sized application, you're looking at a very beefy server setup. Most PHP deployments barely reach a hundred per server. And since this is an open source project, it should be designed to handle basic production workloads, which it could, but it'll cost you a bunch more than if you used the correct languages.

> I'm not a big fan of folks calling out languages as bottlenecks when they have no proof of the actual overhead or of how much faster it would be in another language.

Honestly, I thought it was so obvious that an interpreted language is not good for high-throughput endpoints that it didn't need to be proven. I also thought it was obvious that a logging system is going to handle lots and lots of data. It could be easily proven by doing a bunch of work, but obviously there is no point in me proving it.

Well, looking at our bespoke logging system in PHP handling some 15-20+ million log entries per day on a virtualized dual-core system... it's mostly disk I/O on the underlying MySQL database (currently duplicating to ClickHouse, where we'll eventually store everything). And that is central application logging for about 100 servers (think syslog), some 400 "microservices" (parts of a larger application), and a handful of backend systems. We'll run out of disk space well before PHP becomes a bottleneck here.

> To tweak a PHP deployment to handle hundreds of requests per second ... you're looking at a very beefy server setup.

There's just no way that you're at all familiar with PHP of the last 10 years if you think this is true.
> It could be easily proven by doing a bunch of work but obviously there is no point in me proving it.

Prove it. Please, show me the context and environment in which you think PHP would struggle to serve "hundreds of requests per second". I'd venture a bet that a plain Laravel installation on the cheapest DigitalOcean droplet would top this, and Laravel is "slow" relative to vanilla PHP.

I rebuilt durable-functions in PHP. Durable Functions is a C# actor-model runtime. My PHP implementation meets or exceeds the same benchmarks as the C# version.

> It could be easily proven by doing a bunch of work but obviously there is no point in me proving it.

Because you cannot prove it... :) I wrote this post a few years ago, and it actually spurred some improvements in C#... so here you go: https://withinboredom.info/2022/03/16/yes-php-is-faster-than...

No, it's because I've got more productive things to do than redo benchmarks that have already been done repeatedly. The only way to get PHP to the same speed as compiled languages for web requests is to use experimental tooling. I notice your benchmarks are over 10 runs?! That's not a good sample size. And even more importantly, it's not the same context. Sure, once you compile PHP and have it running, it'll run fast. But PHP has a very specific usage, which is web applications. It's been well known for years that PHP's performance issues are related to the fact that it's an interpreted language that has to be interpreted every time, but if you compile it and run it repeatedly, it can perform extremely well. Which is why every performance-focused PHP nerd is working on experimental tools to do exactly that.

> That's not a good sample size.

Like I said in the blog post: if I tell you the sky is blue and you don't believe me, run them yourself. FWIW, C# is faster now for that particular use case. Also, like I mentioned in a previous blog post... which one would you rather maintain:

- https://github.com/TheAlgorithms/C-Sharp/blob/master/Algorit...
-- merge sort in C#, 130 lines

- https://www.w3resource.com/php-exercises/searching-and-sorti...

-- merge sort in PHP, 60 lines

PHP is often far more concise than C#, and many other languages. I code more in Go than C# or PHP these days, but even Go has its limitations, where something would be easier to express in PHP than in Go. There are even certain classes of algorithms that are butt-ugly in Go but quite pretty in PHP. PHP is still my favorite language, even though I hardly get to use it these days.

> PHP has a very specific usage which is web applications.

Originally, yes. But it outgrew that about 10 years or so ago. It's much more general-purpose now.[1][2]

[1]: https://nativephp.com/ -- desktop applications in PHP
[2]: https://static-php.dev/ -- build self-contained, statically compiled CLIs written in PHP

The C#-to-PHP comparison is not fair, as the link you gave for the C# code uses abstractions to support "arrays" that could also be backed by file storage. An equivalent translation of the PHP code is about 60 lines as well, before applying any code golfing (and including comments and whitespace). You do realize that you are comparing two different implementations with different type systems that use different abstractions? Clearly you can't be serious. So, unless you are being intentionally misleading, this raises questions about the quality of the "PHP solution" that is being worked on.