Show HN: Now I Get It – Translate scientific papers into interactive webpages

nowigetit.us

115 points by jbdamask 6 hours ago


Understanding scientific articles can be tough, even in your own field. Trying to comprehend articles from other fields? Good luck.

Enter, Now I Get It!

I made this app for curious people. Simply upload an article and after a few minutes you'll have an interactive web page showcasing the highlights. Generated pages are stored in the cloud and can be viewed from a gallery.

Now I Get It! uses the best LLMs out there, which means the app will improve as AI improves.

Free for now - it's capped at 20 articles per day so I don't burn cash.

A few things I find interesting (and maybe you will too):

* This is a pure convenience app. I could just as well use a saved prompt in Claude, but sometimes it's nice to have a niche-focused app. It's just cognitively easier, IMO.

* The app was built for myself and colleagues in various scientific fields. It can take an hour or more to read a detailed paper so this is like an on-ramp.

* The app is a place for me to experiment with using LLMs to translate scientific articles into software. The space is pregnant with possibilities.

* Everything in the app is the result of agentic engineering, e.g. plans, specs, tasks, execution loops. I swear by Beads (https://github.com/steveyegge/beads) by Yegge and also make heavy use of Beads Viewer (https://news.ycombinator.com/item?id=46314423) and Destructive Command Guard (https://news.ycombinator.com/item?id=46835674) by Jeffrey Emanuel.

* I'm an AWS fan and have been impressed by Opus' ability to write good CFN. It still needs a bunch of guidance around distributed architecture, but it's way better than last year.

hackernewds - 13 minutes ago

"daily limit reached" on first attempt :/

jbdamask - 2 hours ago

Someone processed a paper on designing kindergartens. Mad props for trying such a cool paper. Really interesting how the LLM designed a soothing color scheme and even included a quiz at the end.

https://nowigetit.us/pages/9c19549e-9983-47ae-891f-dd63abd51...

vunderba - 3 hours ago

Nice job. I have no point of comparison (having never actually used it) - but wasn't this one of the use-cases for Google's NotebookLM as well?

Feedback:

Many times when I'm reading a paper on arXiv, I find myself needing to download the papers cited in the original. Factoring in the cost/time needed for this kind of deep dive, it might be worth having a "Deep Research" button that tries to pull in the related sources and integrate them into the webpage as well.

eterps - an hour ago

https://nowigetit.us/pages/8cf08b76-c5bc-4a7b-bdb4-a0c15089e...

The actual explanation (using code blocks) is almost impossible to read and comprehend.

throwaway140126 - 4 hours ago

A light mode would be great. I know many people ask for a dark mode because they find a light mode more tiring, but for me it's the opposite.

swaminarayan - 2 hours ago

How do you evaluate whether users actually understand better, rather than just feel like they do?

ukuina - 3 hours ago

Neat! I've previously used something similar: https://www.emergentmind.com/

leke - an hour ago

Do you happen to know if LLMs have issues reading PDFs? Would they prefer EPUB format, for example?

leetrout - 5 hours ago

Neat!

Social previews would be great to add

https://socialsharepreview.com/?url=https://nowigetit.us/pag...

toddmorey - 3 hours ago

I’m worried that opportunities like this to build fun/interesting software over models are evaporating.

Maybe three years ago, a service just like this would have been the coolest and most helpful thing I discovered.

But when the same 2 foundation models do the heavy lifting, I struggle to figure out what value the rest of us in the wider ecosystem can add.

I’m doing exactly this by feeding the papers to the LLMs directly. And you’re right, the results are amazing.

But more and more what I see on HN feels like “let me google that for you”. I’m sorry to be so negative!

I actually expected a world where a lot of specialized and fine-tuned models would bloom, where someone with a passion for a certain domain could make a living in AI development. But it seems like the logical end game in tech is just absurd concentration.

jbdamask - 2 hours ago

I see a few people trying to process big papers. Not sure if you're seeing a meaningful error in the UI, but the response from the LLM is, "A maximum of 100 PDF pages may be provided"

lamename - 5 hours ago

I tried to upload a 239 KB pdf and it said "Daily processing limit reached".

jbdamask - 3 hours ago

Lots of great responses. Thank you!

I increased today's limit to 100 papers so more people can try it out.

armedgorilla - 5 hours ago

Thanks John. Neat to see you on the HN front page.

One LLM feature I've been trying to teach Alltrna is scraping out data from supplemental tables (or the figures themselves) and regraphing them to see if we come to the same conclusions as the authors.

LLMs can be overly credulous with the authors' claims, but finding the real data and analysis methods is too time-consuming. Perhaps Claude with the right connectors can shorten that.

fsflyer - 5 hours ago

Some ideas for seeing more examples:

1. Add a donate button. Some folks probably just want to see more examples (or an example in their field) but don't have a specific paper in mind.

2. Have a way to nominate papers to be examples. You could do this in the HN thread without any product changes. This could give good coverage of different fields and uncover weaknesses in the product.

cdiamand - 4 hours ago

Great work OP.

This is super helpful for visual learners and for starting to onboard one's mind into a new domain.

Excited to see where you take this.

Might be interesting to have options for converting Wikipedia pages or topic searches down the line.

BDGC - 4 hours ago

This is neat! As an academic, this is definitely something I can see using to share my work with friends and family, or showing on my lab website for each paper. Can’t wait to try it out.

DrammBA - 4 hours ago

> I could just as well use a saved prompt in Claude

On that note, do you mind sharing the prompt? I want to see how good something like GLM or Kimi does just by pure prompting on OpenCode.

ajkjk - 3 hours ago

cool idea

probably needs better pre-loaded examples, divided up more granularly into subfields. e.g. "Physical sciences" vs "physics", "mathematics and statistics" vs "mathematics". I couldn't find anything remotely related to my own interests to test it on. maybe it's just being populated by people using it, though? in which case, I'll check back later.

TheBog - 3 hours ago

Looks super cool, adding to the sentiment that I would happily pay a bit for it.

alwinaugustin - 2 hours ago

There is a limit of 100 pages. I tried to upload Architectural Styles and the Design of Network-based Software Architectures (REST, by Roy T. Fielding), but it is 180 pages.

onion2k - 4 hours ago

I want this for my company's documentation.

jbdamask - 2 hours ago

The app may be getting throttled. If you're waiting on a job, check back in a bit.

sean_pedersen - 3 hours ago

very cool! would be useful if headings were linkable using anchors

Vaslo - 4 hours ago

I’d love it if this could be self-hosted, but I understand you may want to monetize it. I’ll keep checking back.

croes - 4 hours ago

Are documents hashed and the results cached?

enos_feedler - 6 hours ago

can i spin this up myself? is the code anywhere? thanks!

relaxing - 30 minutes ago

I picked the “Attention is All You Need” example at the top, and wow it is not great!

Didn’t take long to find hallucination/general lack of intelligence:

> For each word, we compute three vectors: a Query (what am I looking for?), a Key (what do I contain?), and a Value (what do I give out?).

What? That’s the worst description of a key-value relationship I’ve ever read, unhelpful for understanding what the equation is doing, and just wrong.

> Attention(Q, K, V) = softmax( Q·Kᵀ / √dk ) · V

> 3 Mask (Optional) Block future positions in decoder

Not present in this equation, also not a great description of masking in an RNN.

> 5 × V Weighted sum of values = output

Nope!

https://nowigetit.us/pages/f4795875-61bf-4c79-9fbe-164b32344...
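For reference, the quoted equation is the paper's scaled dot-product attention, and it does compute a weighted sum of value rows. Here is a minimal numpy sketch of what the formula does (an illustration with made-up sizes, not the app's or the paper's code):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q·Kᵀ / √dk) · V."""
    dk = K.shape[-1]
    scores = Q @ K.T / np.sqrt(dk)                  # query/key similarity, shape (n, n)
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                              # weighted sum of value rows

# toy example: 3 tokens, dk = dv = 4 (illustrative sizes only)
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((3, 4)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (3, 4): one output row per query token
```

Each output row is a convex combination of the rows of V, with mixing weights set by how strongly that query matches each key.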
