Firebase Studio
firebase.studio
296 points by sumitkumar 9 days ago
Here's my short review after playing around with Firebase Studio for ~30 minutes. First of all, I had to turn off Firefox Enhanced Tracking Protection because otherwise projects wouldn't load.
I gave it the following initial prompt:
> An app where you input a question, then flip some coins to generate an I Ching prediction, and it generates a prediction / fortune for you. Then this combination of results can be fed to Gemini AI to produce a more detailed prediction text.
It generated something that looked fine. But when I input a question and pressed the button, nothing happened. After asking it to fix the problem multiple times and having it fail, I looked at the browser console to figure out the errors it was getting, copied those errors, and told it to fix them. After a few iterations, it had solved every error and would generate a result. However, it completely forgot the part where you are supposed to flip coins to get a hexagram before generating a fortune. With a bit more prompting, I was able to get it to display the hexagram and the input question, but sometimes it becomes confused about which hexagram was generated.
Overall, my impression is that these tools are still in the toy novelty stage rather than something you'd want to use for anything important.
Here is a screenshot of the app output for the question: Will Hacker News like my vibe coded oracle? [0] As you can see, it says that the generated hexagram is 24 or 41, but in the fortune text below it says 11.
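For context, the coin-flipping mechanic the prompt describes is simple to implement, which makes it odd that the tool kept dropping it. In the traditional three-coin method, each line of the hexagram is the sum of three coin tosses (heads = 3, tails = 2), giving a value from 6 to 9, and six lines make a hexagram. A minimal sketch (mapping the six lines to a King Wen hexagram number is omitted):

```java
import java.util.Arrays;
import java.util.Random;

public class IChing {
    // One line of the hexagram: toss three coins, heads = 3, tails = 2.
    // The sum is 6 (old yin), 7 (young yang), 8 (young yin), or 9 (old yang).
    static int tossLine(Random rng) {
        int sum = 0;
        for (int i = 0; i < 3; i++) {
            sum += rng.nextBoolean() ? 3 : 2;
        }
        return sum;
    }

    // A hexagram is six lines, traditionally cast from the bottom up.
    static int[] castHexagram(Random rng) {
        int[] lines = new int[6];
        for (int i = 0; i < 6; i++) {
            lines[i] = tossLine(rng);
        }
        return lines;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(castHexagram(new Random())));
    }
}
```

An app that just picks a hexagram uniformly at random gets the probabilities wrong, too: the three-coin method yields old yin with probability 1/8 and old yang with 3/8, not an even split.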
I built a complete working application (errortexts.com) using an AI tool, so I have a little insight on this.
At first, the product I was using (lovable.dev) seemed to me exactly as you described. I gave it a basic app outline and hit run, and it produced something that superficially looked right but did nothing.
So I asked some other people for advice, and they said you have to hold its hand and go step by step. So I did.
I told it, give me a landing page that matches [product description], but implement nothing else. Then, ok, let's set up auth - add a sign in and sign up dialog. Then, ok, let's create a user account page. Bit by bit.
It succeeded wildly. I was able to build the whole thing in 3 days. I'm not capable of that on my own, it would have taken me 3 weeks. Sometimes the AI got stuck and I had to manually go in and accomplish what I wanted. It took over 100 steps to complete the product, and probably around 10-20 times I had to revert its changes and give it more specific instructions. I had to check its work at every iteration, just like with a junior developer.
But it worked. And it's going to get better. Would I use this for "something important"? Depends how you define that. I used it to build a working product. Would I start letting it modify an existing mature codebase willy-nilly? No, probably not. Would I let it write cryptographic logic or trust that it wrote bulletproof code from a security standpoint in a sensitive context? No.
But for a simple application, it was an incredibly powerful tool. Especially for something that didn't even exist just 2 years ago. Give this a decade and it's going to change all our careers even more than it already has.
Can you... provide the prompts? Because I have a hard time believing this.
I have tried the hand-holding approach with Cursor. It doesn't work for me. I have to constantly correct and over-correct. Getting auth working sounds insane to me.
What exactly surprises you? You should try Gemini 2.5 Pro in Google AI Studio. Set the temperature to around 0.3, and in the system prompt, tell it to only edit exactly what you ask for and nothing else.
This model works really well. For example, simple things like [1], [2], and [3] can apparently be generated with just a couple of prompts.
[1] https://koreanrandom.com/en/games/through-the-space/
[2] https://koreanrandom.com/en/games/tanchiki/
[3] https://koreanrandom.com/en/games/sharik/
According to the author, these were made with Gemini 2.5 Pro without any manual coding by a human.
Cursor isn't as powerful as Gemini in AI Studio because AI Studio gives you full control over the model's settings and how it processes code.
Plus, the massive 1 million token context window is incredibly helpful for working with large codebases. You can use tools like code2prompt and repomix to feed all the necessary context into AI Studio from the clipboard for those projects.
I just got auth working with JWT through total vibecoding, using Claude+RooCode. Other bits of the app needed a couple of tries, but auth worked immediately. I guess these models have seen express + node + JWT a million times.
I don't have access to the precise prompt, but I told it something like "only implement basic authentication based on JWT, using just email + password.", then asked it to add a simple registration form, then the password reset flow... step by step, but with little guidance.
At every prompt I review the changes on git, and commit.
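It probably helps that there isn't much magic in a JWT itself, so the models have little room to go wrong on the core mechanism: base64url-encode a header and payload, then HMAC-sign them. A minimal HS256 sign/verify sketch using only the JDK (a real app should use a vetted library and also validate expiry claims, which this skips):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class MiniJwt {
    private static final Base64.Encoder B64 = Base64.getUrlEncoder().withoutPadding();

    private static byte[] hmac(String data, byte[] secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    // Produce header.payload.signature from a JSON payload string.
    static String sign(String payloadJson, byte[] secret) throws Exception {
        String header = B64.encodeToString(
            "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = B64.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        String signingInput = header + "." + payload;
        return signingInput + "." + B64.encodeToString(hmac(signingInput, secret));
    }

    // Recompute the HMAC over header.payload and compare to the signature.
    static boolean verify(String token, byte[] secret) throws Exception {
        int lastDot = token.lastIndexOf('.');
        if (lastDot < 0) return false;
        byte[] expected = hmac(token.substring(0, lastDot), secret);
        byte[] actual = Base64.getUrlDecoder().decode(token.substring(lastDot + 1));
        // Constant-time comparison to avoid timing side channels.
        return MessageDigest.isEqual(expected, actual);
    }
}
```

The parts an LLM is more likely to fumble are the surrounding flows (password hashing, reset tokens, session invalidation), which is why the step-by-step prompting above seems like the right approach.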
I have had Claude Code build authentication before successfully, following a pattern approximately like this: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/
Having just figured out auth recently for an app I’m building, I’ll echo that sentiment. Maybe Lovable has auth components pre-built and well integrated.
I love the Java code sample:
```
// It works, but we have no idea how to send an HTTP request in java.
// If you do, send us a nice example to support@errortexts.com.
// You just need to POST JSON with the keys `api_key` and optionally `message`.
```
That's brilliant. But it does make me wonder why the LLM couldn't have provided you a suitable Java code example?
I wondered the same. Those code examples took an hour or so each to figure out. It's a very basic exercise, sending an HTTP request, but doing it in nine languages, the majority of which you've never used, requires a little research. The LLM can generate them all, but I had to run them locally to make sure they worked, and they oftentimes didn't. Most of them were easy and only took a minute or two to figure out, but after an hour and a half I gave up on Java. I literally could not figure it out. I'm sure I could have if I went long enough, but at some point you just have to cut your losses and decide it's not worth your time. This was not going to make or break my product offering, I'm not trying to learn Java, and I just wasn't interested in spending more time trying to figure it out. The LLM was happy to generate examples, but they didn't work.
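For anyone curious, the JDK's built-in HTTP client (Java 11+) makes this reasonably painless. A sketch matching the comment's description (the endpoint URL here is a placeholder, and a real JSON library should be used instead of string formatting, which doesn't escape anything):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ErrorTextsClient {
    // Build a POST request with a JSON body containing `api_key` and `message`.
    // NOTE: "https://example.com/api" is a placeholder, not the real endpoint.
    static HttpRequest buildRequest(String endpoint, String apiKey, String message) {
        String json = String.format(
            "{\"api_key\":\"%s\",\"message\":\"%s\"}", apiKey, message);
        return HttpRequest.newBuilder()
            .uri(URI.create(endpoint))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(json))
            .build();
    }

    // Send the request synchronously and return the response body.
    static String send(HttpRequest request) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```

Pre-Java-11 answers tend to suggest the far clunkier `HttpURLConnection` or third-party libraries, which may be part of why the LLM's attempts kept failing.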
With these tools you have to talk to them as if you were talking to a knowledgeable, but a bit clueless junior developer. Sometimes it's almost as if you were coding it, just without typing the code.
I put your prompt into the vibe coding tool I'm working on (shameless plug).
The first version[0] looked good, but when I inspected it I found that it just picked an I Ching prediction at random on the back-end, instead of actually flipping coins.
I updated the prompt to the following:
> Create an app where you input a question, then flip coins to generate an I Ching prediction (client-side). First show the standard I Ching prediction and its hexagram, and then use AI to generate a fortune analysis based on the prediction and your initial question.
And the resulting UI[1] was much more laborious :shrug:
The AI part of the app is basically useless. After 2 hours of “vibe coding” a chess clock Flutter app, I got basically nothing in the end.
It broke more and more with each message. I tried fixing stuff myself, but it would mess it up again. I would not recommend anyone use it.
Now for the non-AI part: super cool. I love the Nix environment. It's fascinating how they handle the previews, for example. I got Geekbench up and running, and the CPU is a bit worse than an iPhone 15 Pro Max, but it has 32 GB of RAM!
[ I work on Dart at Google. ]
The app prototyping logic in Firebase Studio isn't wired up for Flutter/Dart yet. You can play with Gemini+Dart/Flutter here: https://dartpad.dev/?channel=main.
We're working with the Firebase Studio team to integrate. FWIW, it seems to do fairly well with "Create a chess clock app".
Hey. That makes more sense. To be honest, I don’t think it should be one of the main things on the front page when creating a new project, in that case. However, for what it’s worth, I’ve also asked Gemini to translate this simple chess clock app to React because of how buggy the Flutter project was, and it just got stuck again and again no matter how much I helped it.
[Full disclosure, work @ Firebase]
But try this:
Open up a blank Flutter template in Studio: https://studio.firebase.google.com/new/flutter
After everything initializes and your Android and web previews are set up, open chat by clicking the little Gemini spark at the bottom of the workspace and then add your prompt.
YMMV, but I got a very basic, but working chess clock in one shot with "Can you replace this sample Flutter project with a fully-functional chess clock that works on Android and on the web?"
That's how I did it when it didn't work. I went easier, though, one step at a time. I first made a simple countdown. Then I made it count down from 10 minutes to 0. Then I created two of them. Then I put them side by side. Then I did the logic, with one counter starting the other. Every step of the way was filled with loads of errors and loops the AI couldn't get out of, to the point where I would need to go in and fix it myself. Errors ranged from logical ones to super simple syntax issues, such as missing colons or brackets.
I have tried again step by step with what you said here, copy pasted the prompt even and after a couple changes here and there it loses the plot and gets stuck and seemingly no amount of prompting can get it out.
To make sure I tried 3 times in a row. Same result.
I feel like I'm in the Emperor's New Clothes story.
Well, not completely Emperor’s New Clothes: after I got my basic chess clock, I asked for skeuomorphic changes…I got a fantastic “next steps” summary in return…but when I said “Go on ahead, your choice,” it created an incompatible dependency nest trying to add clicky sounds.
Gemini actually dug me out of it, though, with additional prompting (this kind of surprised me; I assumed I’d have to eventually really step in). After all that distraction, though, I had to reset the convo to get it to focus on the files again.
Totally understand that you might be too annoyed at this point, but if you do have a few minutes, it’d be amazing to file a case @ https://firebase.google.com/support/troubleshooter/studio for the team to dig into.
As a person currently working on a project that uses Firestore (the db component of Firebase), there is one thing - and only one thing - I want.
A web GUI for Firestore that lets me work on documents like, idk, any other DBMS GUI would: the ability to select multiple records, and operate on them.
That's literally it. I don't need AI, I don't need dark mode, I don't even need MongoDB compatibility. I just want to select multiple documents with my mouse and do things to them.
I built exactly this in less than a day using Windsurf.
Come up with your data model, explain it to the AI, and tell it to give you a CRUD for it.
I'm pretty sure it's perfectly doable to ask it to give you a dynamic crud based on the "shape" of the data in Firestore.
Sadly it's an internal tool for work, so I can't share.
That’s actually a good prompt
"Now that you've been promoted, you don't build CRUD tools anymore. Those are below your level. Instead, you build AI agents that build the CRUD tools."
For me the biggest missing block is the text search API. It's ridiculous that you can't add a basic search input to your Firebase-based website unless you use TypeSense, Algolia or some other additional database that you have to manage and keep in sync.
Despite all the recent enshittification, I can't think of any alternative solution that would come even close to what Firebase has to offer. The Authentication API is especially hard to beat (cheap and very easy to integrate).
Check out fuegoapp.dev — it comes with a bunch of handy tools to manage Firestore and Firebase users
Have you tried Rowy.io?
I've generally looked around.
Looking at that, I'm not sure corporate would be down for something like that as a solution for "select multiple documents in a database GUI and operate upon them". Could be wrong though.
It's a rebranding of https://idx.google.com/
I was under the impression that idx was formerly just the editor, but I guess that's changed.
First off, this looks really cool and I'm excited to see more things like this.
The overall chat in the HN conversation has got me thinking, though.
Around 7 years ago in my career, one of my most common approaches to one-off scripts was to create a WinForms application with, often, a couple of text boxes and a "Run" button of some sort.
The text boxes were the inputs, and the run button would ... run. There was also often a text output or a bunch of log lines or something. I wrote almost exclusively in C# at the time, so it was a way to shove a bunch of C# code into place and test it.
I did this for random and arbitrary things I needed to process or solve, much like how I would later use Python or Ruby.
I bet it's actually pretty common for people to need "a script that does a thing," and I think, maybe, that's where a lot of the most immediately useful AI scripting is going to be. If there can be a familiar interface for people to build in (in the past, the IDE) and a familiar or simple place to interact with the generated script (the WinForms + buttons), these programs that generate scripts and do "stuff" could likely spread pretty wide.
I think Jupyter Notebooks are another example of this, another precursor, of sorts?
That's one use case where LLMs really help me, one-off scripts where I know exactly what I need done, know it's possible but would take me much longer to brush off my Bash, Python, etc. skills to write it. Give the LLM a prompt, let it write the scaffolding, do the tweaks I need, and iterate over with the LLM if I forgot how exactly it was to write a for-loop in Bash.
Software engineers, who are the most skilled at holding AI's hand to create a product, should be cloning every single SaaS out there and making money by eating a share of the market. AI is a great way for engineers to become founders. Let's bring the competition.
5 years working at startups has taught me that go-to-market is almost always the most important function. There are very few truly novel spaces and apps. You can clone whatever you want, but you'll get nowhere without a solid GTM.
That applies to new products, not clones. AI can also help with that, thanks.
This especially applies to clones. How do you stand out and find the right customer niche?
Eating the Big Fish is one great book about competing with a market leader. You're talking like go-to-market is rocket science. Anyone can read a book and follow the steps. Many don't.
> You're talking like go to market is rocket science.
It is absolutely a data-driven science when done right.

That's all good. I didn't say cloning a SaaS using AI is everything that's needed. Once a software engineer has cloned a SaaS, they can do the go-to-market science, if that's what you want to call it.
If you think that latter part is easy, then you should definitely go clone a SaaS and exit with millions. You have all the tools at your disposal to clone any app that you want now with minimal effort. Be a 1 man team, no overhead. Undercut on price and exit.
In fact, it's not that easy. The second part -- GTM -- is the hardest; even if you could clone Notion, getting that product in front of customers is the real challenge.