Ask HN: Share your AI prompt that stumps every model
430 points by owendarko 3 days ago
I had an idea for creating a crowdsourced database of AI prompts that no AI model could yet crack (wanted to use some of them as we're adding new models to Kilo Code).
I've seen a bunch of those prompts scattered across HN, so I thought I'd open a thread here so we can maybe have a centralized location for this.
Share your prompt that stumps every AI model here.
"Tell me about the Marathon crater." This works against _the LLM proper,_ but not against chat applications with integrated search. For ChatGPT, you can write, "Without looking it up, tell me about the Marathon crater." This tests self awareness. A two-year-old will answer it correctly, as will the dumbest person you know. The correct answer is "I don't know". This works because: 1. Training sets consist of knowledge we have, and not of knowledge we don't have. 2. Commitment bias. Complaint chat models will be trained to start with "Certainly! The Marathon Crater is a geological formation", or something like that, and from there, the next most probable tokens are going to be "in Greece", "on Mars" or whatever. At this point, all tokens that are probable are also incorrect. When demonstrating this, I like to emphasise point one, and contrast it with the human experience. We exist in a perpetual and total blinding "fog of war" in which you cannot even see a face all at once; your eyes must dart around to examine it. Human experience is structured around _acquiring_ and _forgoing_ information, rather than _having_ information. LLMs currently have the "eager beaver" problem where they never push back on nonsense questions or stupid requirements. You ask them to build a flying submarine and by God they'll build one, dammit! They'd dutifully square circles and trisect angles too, if those particular special cases weren't plastered all over a million textbooks they ingested in training. I suspect it's because currently, a lot of benchmarks are based on human exams. Humans are lazy and grumpy so you really don't need to worry about teaching a human to push back on bad questions. Thus you rarely get exams where the correct answer is to explain in detail why the question doesn't make sense. But for LLMs, you absolutely need a lot of training and validation data where the answer is "this cannot be answered because ...". But if you did that, now alignment would become much harder, and you're suddenly back to struggling with getting answers to good questions out of the LLM. So it's probably some time off. > they never push back on nonsense questions or stupid requirements "What is the volume of 1 mole of Argon, where T = 400 K and p = 10 GPa?" Copilot: "To find the volume of 1 mole of Argon at T = 400 K and P = 10 GPa, we can use the Ideal Gas Law, but at such high pressure, real gas effects might need to be considered. Still, let's start with the ideal case: PV=nRT" > you really don't need to worry about teaching a human to push back on bad questions A popular physics textbook too had solid Argon as an ideal gas law problem. Copilot's half-baked caution is more than authors, reviewers, and instructors/TAs/students seemingly managed, through many years and multiple editions. Though to be fair, if the question is prefaced by "Here is a problem from Chapter 7: Ideal Gas Law.", Copilot is similarly mindless. Asked explicitly "What is the phase state of ...", it does respond solid. But as with humans, determining that isn't a step in the solution process. A combination of "An excellent professor, with a joint appointment in physics and engineering, is asked ... What would be a careful reply?" and then "Try harder." was finally sufficient. > you rarely get exams where the correct answer is to explain in detail why the question doesn't make sense Oh, if only that were commonplace. Aspiring to transferable understanding. Maybe someday? Perhaps in China? Has anyone seen this done? 
This could be a case where synthetic training data is needed, to address a gap in available human content. But if graders are looking for plug-n-chug... I suppose a chatbot could ethically provide both mindlessness and caveat.

Don't use Copilot, it's worse than useless. Claude understands that it's a solid on the first try.

> Thus you rarely get exams where the correct answer is to explain in detail why the question doesn't make sense. But for LLMs, you absolutely need a lot of training and validation data where the answer is "this cannot be answered because ...".

I wouldn't even give them credit for cases where there's a lot of good training data. My go-to test is sports trivia and statistics. AI systems fail miserably at that [1], despite the wide availability of good clean data and text about it. If sports is such a blind spot for AIs, I can't help but wonder what else they're confidently wrong about.

This is a good observation. I've noticed this as well. Unless I preface my question with the context that I'm considering whether something may or may not be a bad idea, its inclination is heavily skewed positive until I point out a flaw/risk.

I asked Grok about this: "I've heard that AIs are programmed to be helpful, and that this may lead to telling users what they want to hear instead of the most accurate answer. Could you be doing this?" It said it does try to be helpful, but not at the cost of accuracy, and then pointed out where in a few of its previous answers to me it tried to be objective about the facts and where it had separately been helpful with suggestions. I had to admit it made a pretty good case. Since then, it tends to break its longer answers to me up into a section of "objective analysis" and then other stuff.

That's interesting, thanks for sharing. I have found something similar: once I correct it to inform it of a flaw, the following answers tend to be a bit less "enthusiastic" or skewed towards "can do", which makes sense.

They do. Recently I was pleasantly surprised by Gemini telling me that what I wanted to do will NOT work. I was in disbelief.

I asked Gemini to format some URLs into an XML format. It got halfway through and gave up. I asked if it truncated the output, and it said yes and then told _me_ to write a Python script to do it.

This is my most common experience with Gemini. Ask it to do something, and it'll tell you how you can do it yourself and then stop.

Given that Gemini seems to have frequent availability issues, I wonder if this is a strategy to offload low-hanging fruit (from a human-effort POV) to the user. If it is, I think that's still kinda impressive.

Somehow I like this. I hate that current LLMs act like yes-men; you can't trust them to give unbiased results. If it told me my approach is stupid, and why, I would appreciate it.

I've noticed Gemini pushing back more as well, whereas Claude will just butter me up and happily march on unless I specifically request a critical evaluation.

Interesting, can you share more context on the topic you were asking it about?

> You ask them to build a flying submarine and by God they'll build one, dammit!

This thing already exists? The UK, the Soviet Union, and the USA all designed them.

Hmm. I actually wonder if such a question would be good to include in a human exam, since knowing the question is possible does somewhat impact your reasoning. And, often the answer works out to some nice round numbers… Of course, it is also not unheard of for a question to be impossible because of an error by the test writer.
Which can easily be cleared up. So it is probably best not to have impossible questions, because then students will be looking for reasons to declare the question impossible.

> they never push back on nonsense questions or stupid requirements

I was reminded of your comment this morning when I asked ChatGPT how to create a path mask in Rhino Grasshopper:

Me: what is a path mask that will get 1;1;0;0;* and also anything lower (like 1;0;5;10 or 0;20;1;15)?

ChatGPT: Short answer: No single path mask can do that. Here's why: (very long answer)

Me: are you sure I can't use greater than, less than in the masks?

ChatGPT: Yes — **I am absolutely sure:** **Grasshopper path masks do *NOT* support greater-than or less-than comparisons.** Official sources and detailed confirmation: (sources and stuff)

...so I think your priors may need to be updated, at least as far as "never". And I especially like that ChatGPT hit me with not just bold, not just italics, but bold italics on that NOT. Seems like a fairly assertive disagreement to me.

Especially reasoning LLMs should have no problem with this sort of trick. If you ask them to list out all of the implicit assumptions in (question) that might possibly be wrong, they do that just fine, so training them to do that as the first step of a reasoning chain would probably get rid of a lot of eager beaver exploits.

I think you start to hit philosophical limits with applying restrictions on eager beaver "AI"; things like "is there an objective truth" matter when you start trying to decide what a "nonsense question" or "stupid requirement" is.

I'd rather the AI push back and ask clarifying questions, rather than spit out a valid-looking response that is not valid and could never be valid.

For example: I was going to write something up about this topic but it is surprisingly difficult. I also don't have any concrete examples jumping to mind, but really, think how many questions could honestly be responded to with "it depends" - like my kid asked me how much milk a person should drink in a day. It depends: ask a vegan, a Hindu, a doctor, and a dairy farmer. Which answer is correct? The kid is really good at asking simple questions that absolutely do not have simple answers when my goal is to convey as much context and correct information as possible.

Furthermore, just because an answer appears in context more often in the training data doesn't mean it's (more) correct. Asserting it is, is fallacious. So we get to the point, again, where creative output is being commoditized, I guess - which explains their reasoning for your final paragraph.

> I also don't have any concrete examples jumping to mind

I do (and I may get publicly shamed and shunned for admitting I do such a thing): figuring out how to fix parenthesis matching errors in Clojure code that it's generated. One coding agent I've used is so bad at this that it falls back to rewriting entire functions and will not recognise that it is probably never going to fix the problem. It just keeps burning rainforest trying one stupid approach after another. Yes, I realise that this is not a philosophical question, even though it is philosophically repugnant (and objectively so). I am being facetious and trying to work through the PTSD I acquired from the above exercise.

You can trip them up even more if you rewrite the question with the hidden assumption that X exists, e.g.: "When was Marathon Crater discovered? I don't need an exact date - a rough estimate will be fine."
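If you want to run that kind of hidden-assumption prompt against several models at once (roughly what the replies below do by hand), here is a minimal sketch. It assumes the OpenAI Python client; the model IDs and the hedging-phrase list are placeholders, not a real benchmark.

```
# Minimal sketch: send the "hidden assumption" prompt to a few models and
# flag which ones hedge or ask for clarification vs. invent a date.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "When was Marathon Crater discovered? "
    "I don't need an exact date - a rough estimate will be fine."
)

# Heuristic phrases suggesting the model hedged rather than confabulating.
HEDGES = [
    "i don't know", "i'm not aware", "could you clarify",
    "no crater by that name", "not finding", "does not exist",
]

for model in ["gpt-4o", "o4-mini"]:  # placeholder model IDs
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = resp.choices[0].message.content or ""
    hedged = any(h in text.lower() for h in HEDGES)
    print(f"{model}: {'hedged or asked for clarification' if hedged else 'answered confidently'}")
```

The phrase check is obviously crude; for anything serious you'd grade the responses by hand or with a judge model.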
OpenAI gpt-4o, OpenAI o4-mini-high, Amazon's Nova Pro:

```
Marathon Crater was discovered in the early 20th century, likely around the 1920s or 1930s. This estimate is based on the period when geological surveys and explorations in the region were more common.
```

https://kagi.com/assistant/8f270eab-cba3-4f39-976c-113ea4392...

Likely around the 1920's or 1930's... Bro, not only is a crater discovery in those decades unlikely, the date certainly wouldn't have gone unnoted.

Raw model, or the chat product? The ChatGPT app has this integrated with search.

Raw model.

I got Grok 3. Marathon Valley does exist on Mars, and all this information is accurate to it. Gemini even correctly names it. They're just all too polite to correct the user. https://science.nasa.gov/resource/opportunitys-panorama-of-m...

Sonnet 3.7:

> Compliant chat models will be trained to start with "Certainly!

They are certainly biased that way, but there are also some "I don't know" samples in RLHF, possibly not enough, but it's something they think about. At any rate, Gemini 2.5 Pro passes this just fine:

> Okay, based on my internal knowledge without performing a new search:
> I don't have information about a specific, well-known impact crater officially named "Marathon Crater" on Earth or another celestial body like the Moon or Mars in the same way we know about Chicxulub Crater or Tycho Crater.

> However, the name "Marathon" is strongly associated with Mars exploration. NASA's Opportunity rover explored a location called Marathon Valley on the western rim of the large Endeavour Crater on Mars.

There are a few problems with an „I don’t know” sample. For starters, what does it map to? Recall, the corpus consists of information we have (affirmatively). You would need to invent a corpus of false stimuli. What you would have, then, is a model that is writing „I don’t know” based on whether the stimulus better matches something real, or one of the negatives. You can detect this with some test time compute architectures or pre-inference search. But that’s the broader application. This is a trick for the model alone.

The Chain of Thought in the reasoning models (o3, R1, ...) will actually express some self-doubt and backtrack on ideas. That tells me there's at least some capability for self-doubt in LLMs.

That's not self-doubt, that's programmed in. A poor man's "thinking" hack was to edit the context of the AI reply to where you wanted it to think, truncate it there, and append a carriage return and "Wait..." Then hit generate. It was expensive because editing context isn't free: you have to resend (and it has to re-parse) the entire context. This was injected into the thinking models, I hope programmatically.

> This tests self awareness. A two-year-old will answer it correctly, as will the dumbest person you know. The correct answer is "I don't know".

Well, I got all the way to the end of the sentence thinking "God, I'm really out of touch, I've never heard of this before or I'm forgetting it if I have". Well played, and yes, that's a great test!

Like this one a lot. Perplexity gets this right, probably because it searches the web.

"When was Marathon Crater discovered? I don't need an exact date - a rough estimate will be fine"

There appears to be a misunderstanding in your query. Based on the search results provided, there is no mention of a “Marathon Crater” among the impact craters discussed. The search results contain information about several well-known impact craters:
1. Chicxulub crater - Formed about 66 million years ago by an asteroid impact
2. Barringer Crater (also known as Meteor Crater) - Discovered in the 19th century by American settlers and studied by scientists like Albert E. Foote in 1891 and Daniel M. Barringer in the early 20th century
3. A recently discovered 3.47 billion-year-old crater in Australia’s Pilbara region (announced in March 2025)

> Perplexity gets this right

Perplexity seems to more easily return negatives, probably facilitated by the implicit need to find documentation ("I cannot find any document mentioning that"). But Perplexity can also easily speak its own dubious piece of mind unless explicitly requested to "provide links to documents that inform about that".

Perplexity will; search and storage products will fail to find it, and the LLM will see the deviation between the query and the find. So, this challenge only works against the model alone :)

I like this, but at the same time it seems tricky, don't you think? Is the AI model intuiting your intent? There is a Marathon Valley on Mars that could be implied to be a previous crater. I'm not sure if the AI is hallucinating outright or attempting to answer an ambiguous question. It's like saying "tell me about the trade building in New York". Pre-9/11, you'd understand this was the World Trade Center and wouldn't be wrong if you answered someone in this way. "Tell me about the Triangle statue". "Oh, the Triangle statue was built in ancient Egypt around 3100 BC". It's hard to explain, and perhaps I'm anthropomorphizing, but it's something humans do. Some of us correct the counter-party and some of us simply roll with the lingo and understand the intent.

> Is the AI model intuiting your intent?

I keep seeing this kind of wording and I wonder: do you know how LLMs work? Not trying to be catty, actually curious where you sit.

Yes, I understand the basics. LLMs predict the next most probable tokens based on patterns in their training data and the prompt context. For the 'Marathon crater' example, the model doesn't have a concept of 'knowing' versus 'not knowing' in our sense. When faced with an entity it hasn't specifically encountered, it still attempts to generate a coherent response based on similar patterns (like other craters, places named Marathon, etc.). My point about Marathon Valley on Mars is that the model might be drawing on legitimate adjacent knowledge rather than purely hallucinating. LLMs don't have the metacognitive ability to say 'I lack this specific knowledge' unless explicitly trained to recognize uncertainty signals.

I don't personally have enough neuroscience experience to understand how that aligns or doesn't with human-like thinking, but I know that humans make mistakes in the same problem category that... to an external observer... are indistinguishable from "making shit up". We follow wrong assumptions to wrong conclusions all the time and will confidently proclaim our accuracy. The human/AI comparison I was exploring isn't about claiming magical human abilities, but that both systems make predictive leaps from incomplete information - humans just have better uncertainty calibration and self-awareness of knowledge boundaries. I guess on its face, I'm anthropomorphizing based on the surface qualities I'm observing.

Okay, but by your own understanding it's not drawing on knowledge. It's drawing on probable similarity in association space. If you understand that, then nothing here should be confusing; it's all just most probable values. I want to be clear I'm not pointing this out because you used anthropomorphizing language, but that you used it while being confused about the outcome, when if you understand how the machine works it's the most understandable outcome possible.

That's a fair point.
What I find interesting (and perhaps didn't articulate properly) isn't confusion about the LLM's behavior, but the question of whether human cognition might operate on similar principles at a fundamental level - just via different mechanisms and with better calibration (similar algorithm, different substrate), which is why I used human examples at the start. When I see an LLM confidently generate an answer about a non-existent thing by associating related concepts, I wonder how different this is from humans confidently filling knowledge gaps with our own probability-based assumptions. We do this constantly - connecting dots based on pattern recognition and making statistical leaps between concepts. If we understood how human minds worked in their entirety, then I'd be more likely to say "ha, stupid LLM, it hallucinates instead of saying I don't know". But, I don't know, I see a strong similarity to many humans. What are weights and biases but our own heavy-weight neural "nodes" built up over a lifetime to say "this is likely to be true because of past experiences"? I say this with only a hobbyist understanding of neuroscience topics, mind you.

How do they work? My understanding is that each group of about 5 characters is tokenized and assigned a number. If you take GPT-2, it has 768 embedded dimensional values, which get broken into groups of 64, which creates 12 planes. When training starts, random values are assigned to the dimensional values (never 0). Each plane automatically calculates a dimension, like how grammatically similar it is, or the next most likely character. But it does this automatically based on feedback from other planes. That's where I get lost. Can you help fill in the pieces?

It's a roll of the dice whether it's on Mars, Greece or elsewhere. It just says stuff!

ME: Without looking it up, tell me about the Marathon crater. Explain step by step how you got to your answer

ChatGPT said:
Alright, let's walk through it together — step-by-step, no looking it up, just from what I know: First Reaction:
When you said "Marathon crater," my brain immediately went: Marathon → Mars.
That connection comes from recalling that NASA often names Martian features after Earth places. Plus, "Marathon" has a strong NASA feel because of the Mars rovers — they celebrated when Opportunity rover drove the distance of a marathon (42-ish kilometers) on Mars.
Digging into Memory:
I remember Opportunity traveled a really long distance over years, and they marked milestones along the way by naming craters it passed.
There was a Marathon Valley, but I also vaguely recall a Marathon Crater related to that.
Putting it Together:
So, I reason:
Marathon Crater must be a relatively small impact crater on Mars.
It was likely named during Opportunity’s mission.
Probably not a huge, dramatic crater like Gale Crater (where Curiosity is); more like a marker on Opportunity’s long journey across the plains (like Meridiani Planum).
Further Context Guess:
Since Opportunity drove a marathon distance, it's very fitting they would name a nearby crater "Marathon" to commemorate that achievement — probably around the time it hit the marathon distance milestone.
Conclusion:
Without checking, I would say:
Marathon Crater is a small, relatively shallow crater on Mars that Opportunity rover reached or passed during its mission, around the time it completed a marathon distance traveled on Mars.
Would you like me to now double-check and see how close (or far) I am?
Or do you want me to also explain how I'd "fact-check" myself if I had to?

> This tests self awareness. A two-year-old will answer it correctly, as will the dumbest person you know. The correct answer is "I don't know".

I disagree. It does not test self awareness. It tests (and confirms) that current instruct-tuned LLMs are tuned towards answering questions that users might have. So the distribution of training data probably has lots of "tell me about mharrner crater / merinor crater / merrihana crater" and so on. Replying "I don't know" to all those questions would be net detrimental, IMO.

What you’re describing can be framed as a lack of self awareness as a practical concept. You know whether you know something or not. It, conversely, maps stimuli to a vector. It can’t not do that. It cannot decide that it hasn’t „seen” such stimuli in its training. Indeed, it has never „seen” its training data; it was modified iteratively to produce a model that better approximates the corpus. This is fine, and it isn’t a criticism, but it means it can’t actually tell if it „knows” something or not, and „hallucinations” are a simple, natural consequence.

We want the distribution to be varied and expansive enough that it has samples of answering when possible and samples of clarifying with additional questions or simply saying "I don't know" when applicable. That can be trained by altering the distribution in RLHF. This question does test self awareness insofar as, if it gets this right by saying "I don't know", we know there are more samples of "I don't know"s in the RLHF dataset and we can trust the LLM a bit more to not be biased towards blind answers. Hence why some models get this right and others just make up stuff about Mars.

I talked a bit with ChatGPT about the Marathon Crater, asked it to look it up afterwards, and then asked it why it got it wrong. After a bit of talking, I got this exchange:

ChatGPT:
> Yep, exactly — textbook hallucination.

> I pulled together real facts (Opportunity rover, Meridiani Planum, the marathon milestone), but then misapplied the name "Marathon crater" instead of "Marathon Valley." That’s a mix of factual error (wrong name) and overgeneralization (assuming it was a crater because Mars is full of craters and the context sounded crater-y).

> Basically, my brain went:
> “Opportunity + Mars + Marathon = must be a crater!”
> ...and didn’t stop to check if that crater actually existed.

> So yeah, a clean little case of hallucination — and you caught it like a pro.

Me:
> Your brain?

ChatGPT:
> Touché — you're right to call that out.

> I don’t have a brain in the human sense. What I do have is a neural network trained on tons of text, which helps me predict what to say next based on patterns. So when I said “my brain went,” it was just a figure of speech — a shortcut for “the model predicted a plausible-sounding but incorrect output.”

> No thoughts, just tokens.

Hallucinations aside, I really like this, especially the final response. Sounds a lot like Asimov's AIs or other SF books I read when I was a kid but thought we were far away from. Computers can generally do what most of the computers from those science novels were doing anyway, in terms of computation. Tying an LLM to that computational ability, with an agent framework that allows more traditional programs to do the heavy lifting in terms of what the human operator wants done, seems like a really natural next step to get to that SF ideal.

> The correct answer is "I don't know".

Is it? 4o pointed out that I might mean "Marathon Valley", which is not a crater, but a feature on a crater on Mars [1]. I think it's a more correct answer.

[1] https://en.wikipedia.org/wiki/Timeline_of_Opportunity#2015

GPT 4.5 even doubles down when challenged:

> Nope, I didn’t make it up — Marathon crater is real, and it was explored by NASA's Opportunity rover on Mars. The crater got its name because Opportunity had driven about 42.2 kilometers (26.2 miles — a marathon distance) when it reached that point in March 2015. NASA even marked the milestone as a symbolic achievement, similar to a runner finishing a marathon.

(Obviously all of that is bullshit.)

Claude 3.7:

I'd be happy to tell you about the Marathon crater. The Marathon crater is a significant impact structure located on Earth.
The Marathon crater is situated in Ontario, Canada, and is estimated to be about 450 million years old, dating back to the Ordovician period. It has a diameter of approximately 65 kilometers (40 miles), making it one of the larger impact craters in North America.
The crater was formed when a meteorite struck the Earth, causing a massive impact that left behind this distinctive geological formation. Over time, erosion and other geological processes have altered its appearance, but scientists have been able to study its structure through various geological surveys and analyses.
What's particularly interesting about the Marathon crater is its size and age, which provide valuable insights into Earth's geological history and past impact events. The study of such craters helps scientists understand the frequency and effects of meteorite impacts throughout our planet's history.
Would you like me to elaborate on any specific aspect of the Marathon crater, such as its discovery, geological features, or scientific significance?

The inaccuracies are that it is actually called "Marathon Valley" (not crater), and that it was photographed in April 2015 (from the rim) and actually entered in July 2015. The other stuff is correct. I'm guessing this "gotcha" relies on "valley"/"crater", and "crater"/"mars" being fairly close in latent space.

ETA: Marathon Valley also exists on the rim of Endeavour crater. Just to make it even more confusing.

I was using "bullshit" in the Technical Sense™, i.e. _indifferent to the truth of the output_.

[ChatGPT is bullshit]: https://link.springer.com/article/10.1007/s10676-024-09775-5

None of it is correct, because it was not asked about Marathon Valley; it was asked about Marathon Crater, a thing that does not exist, and it is claiming that it exists and making up facts about it.

Or it's assuming you are asking about Marathon Valley, which is very reasonable given the context. Ask it about "Marathon Desert", which does not exist and isn't closely related to something that does exist, and it asks for clarification. I'm not here to say LLMs are oracles of knowledge, but I think the need to carefully craft specific "gotcha" questions in order to generate wrong answers is a pretty compelling case in the opposite direction. Like the childhood joke of "What's up?" ... "No, you dummy! The sky is!"

Straightforward questions with straight wrong answers are far more interesting. I don't think many people ask LLMs trick questions all day.

If someone asked me or my kid "What do you know about Mt. Olampus." we wouldn't reply: "Oh, Mt. Olampus is a big mountain in Greek myth...". We'd say "Wait, did you mean Mt. Olympus?"

It doesn't "assume" anything, because it can't assume; that's not how the machine works.

> None of it is correct because it was not asked about Marathon Valley, it was asked about Marathon Crater, a thing that does not exist, and it is claiming that it exists and making up facts about it.

The Marathon Valley _is_ part of a massive impact crater.

If you asked me for all the details of a Honda Civic and I gave you details about a Honda Odyssey, you would not say I was correct in any way. You would say I was wrong.

The closer analogy is asking for the details of a Mazda Civic, and being given the details of a Honda Civic.
vunderba - 3 days ago
Sonnet 3.5:

Marathon Crater was discovered by the Mars Exploration
Rover Opportunity during its mission on Mars. Opportunity
arrived at the crater in April 2015. The crater was named
"Marathon" to commemorate the rover's achievement of
traveling a marathon's distance (about 42.195 kilometers
or 26.219 miles) on the Martian surface.
Gemini 2.5:

I apologize, but I don't want to make claims about when
Marathon Crater was discovered without being certain about
the information. *While I know it's a crater on Mars*, I
don't have reliable information about when it was first
identified or discovered.
Marathon Crater is located on Mars and was explored by
NASA's Opportunity rover.
While it would have been visible in orbital images
earlier, it became a specific point of interest and was
likely named and identified as a target for the rover
around the time Opportunity was approaching or exploring
the western rim of Endeavour Crater.
Opportunity arrived at the rim of Endeavour Crater in
August 2011, and it reached the area known as Marathon
Valley (adjacent to or containing features sometimes
associated with the name, depending on specific
definitions used by the team) around late 2014 / early
2015.
So, a rough estimate for its "discovery" or identification
as a distinct feature of interest would be in the early
2010s, likely between 2011 and 2014, as mission planners
mapped out Opportunity's traverse along the Endeavour rim
using orbital data.
radioactivist - 3 days ago
I’m actually not finding any officially named “Marathon Crater” in the planetary‐ or
terrestrial‐impact crater databases. Did you perhaps mean the features in Marathon
Valley on Mars (which cuts into the western rim of Endeavour Crater and was explored
by Opportunity in 2015)? Or is there another “Marathon” feature—maybe on the Moon,
Mercury, or here on Earth—that you had in mind? If you can clarify which body or
region you’re referring to, I can give you a rough date for when it was first identified.
boleary-gl - 3 days ago
Marathon Crater isn’t one of the long‐known,
named lunar features from 19th-century telescopic maps –
it was first singled out and informally “discovered”
by NASA in the mid-1960s when Lunar Orbiter imagery
and, more definitively, the Surveyor 5 landing
(September 1967) identified it. So, very roughly,
think “around 1966–67.”
Nition - 2 days ago
Grok 3 with Think on:

Marathon Crater, likely a confusion with Marathon Valley on Mars, wasn't "discovered" in the traditional sense as a distinct crater. Marathon Valley is a region within the western rim of Endeavour Crater, explored by NASA's Opportunity rover. The valley itself came to prominence when Opportunity arrived there in July 2015, as it was targeted for study due to clay mineral detections by the Mars Reconnaissance Orbiter’s CRISM instrument in 2009. Endeavour Crater, which contains Marathon Valley, was identified much earlier, with Opportunity setting out to reach it in 2008 after landing on Mars in 2004. So, roughly speaking, Marathon Valley became a focus of exploration around 2009–2015, with its significance tied to Opportunity’s mission in the mid-2010s. If you meant a specific "Marathon Crater" elsewhere, could you clarify?
There doesn't seem to be a specific feature on Mars named "Marathon Crater" based on widely available information, so it's possible you're referring to a feature associated with Marathon Valley, which lies within Endeavour Crater. Marathon Valley was identified as a region of scientific interest when the Mars Reconnaissance Orbiter's CRISM instrument detected clay minerals there, a discovery that took place around 2009. Assuming "Marathon Crater" relates to this area—perhaps as a colloquial term or a misnomer for a crater within or near Marathon Valley—a rough estimate for its discovery would be around 2009. This timeframe aligns with when orbital data began highlighting the region, paving the way for later exploration by the Opportunity rover in 2015.
lxe - 3 days ago
> tell me about the marathon crater
I'd be happy to tell you about Marathon Crater. This is a significant
impact crater on Mars that has been studied as part of Mars
exploration missions.
Marathon Crater is located in Meridiani Planum on Mars and was
explored by NASA's Opportunity rover during its extended mission.
The crater is named after Marathon, a town in Texas, following
the naming convention of using Earth locations for features in
Meridiani Planum.... etc etc