Generated 2026-04-13 | v2 — enriched with 12-essay analysis corpus
Science & Technology is a NEW category for 2026. It has no prior winner data, no historical essay analyses, and no category-specific structural patterns. This profile draws strategic guidance from the cross-category analysis of 12 winning essays across 7 established categories, web intelligence on judging standards, and the intellectual commitments of the John Locke Institute.
The addition of Science & Technology is the Institute’s most provocative expansion. This is not a science fair — the judges are not evaluating technical knowledge. They are evaluating your ability to reason philosophically about the role of science and technology in human life. Every prompt in this category is fundamentally a question about epistemology (what can science know?), ethics (what should technology do?), or political philosophy (who governs the interface between science and society?). Students who treat this as a STEM category will lose to students who treat it as a philosophy category that happens to involve science.
The competition receives 63,000+ submissions from 191 countries. Essays are judged to a near-undergraduate standard by panelists from Oxford, Cambridge, Harvard, Princeton, and Stanford, with the judging panel chaired by Prof. Terence Kealey. The shortlist rate is approximately 10% per category.
Critical context: Prof. Kealey is a biomedical scientist and author of The Economic Laws of Scientific Research (1996), which argues against government funding of science. The chairman of the judging panel has strong, heterodox views on science policy. This does not mean you must agree with him — but it means the judges will reward essays that engage seriously with the philosophy of science rather than treating scientific consensus as unchallengeable dogma.
Cross-category analysis shows winning theses are placed within the first 150 words, are "conceptual" in type, and score high in specificity and contestability. For Science & Technology, your thesis must engage the philosophical dimension immediately. Do not open with a factual claim about technology; open with an argument about what the technology means.
The thesis must be genuinely contestable. The judges want to see courage — a student willing to defend an unpopular position with rigorous argument.
Cross-category data shows dialectical and progressive structures dominate winning essays. For Science & Technology, a dialectical structure is optimal because every prompt sets up a tension between two frameworks: science vs. free speech, necessity vs. indulgence, human vs. machine.
Cross-category data shows academic sources dominate winning essays; expect the same standard to apply in Science & Technology.
Follow the cross-category pattern: one strong counterargument, resolved through reframing. In Science & Technology, the most powerful counterargument often inverts the essay’s premise: if you argue free speech helps science, steelman the case that it demonstrably harms it (anti-vax movements, climate denial, research harassment). Then show why your framework still holds despite this evidence.
Opening: Cross-category data shows effective openings include scenarios, provocative questions, bold claims, and definitions. For Science & Technology, the most effective opening presents a specific moment where science and human values collided — Oppenheimer’s “I am become Death,” Galileo before the Inquisition, the first ChatGPT release — and pivots to the philosophical question.
Closing: Synthesis — always. Extend your argument to a question the prompt did not ask. If your essay is about AI politeness, your conclusion might address what it means for human self-understanding that we instinctively treat a language model as a moral patient.
What they’re really asking: This is a question about the epistemology of scientific knowledge and the political conditions under which science thrives. The surface tension: science depends on open inquiry (which implies free speech), but scientific institutions are increasingly threatened by misinformation, harassment, and politicized challenges to expert knowledge (which imply that unrestricted speech harms science). The deeper question: is the current conflict between free speech and science a genuine philosophical tension, or a sign that scientific institutions have become too fragile?
Obvious angle (avoid): “Free speech allows misinformation that harms science, so it is the enemy of science in some respects.” This is the consensus view among science communicators. Also avoid: “Free speech is essential to science, full stop” — this ignores legitimate concerns about organized disinformation.
Winning angle: Argue that the question reveals a confusion between science as a method and science as an institution. Free speech is the natural ally of science-as-method — the scientific method is, at its core, an institutionalized form of free speech where any claim can be challenged by any person with evidence. But free speech can be the enemy of science-as-institution — when public challenges to scientific authority undermine funding, policy influence, and social trust. The winning thesis: the apparent conflict between free speech and science is actually a conflict between two models of science — the Popperian model (science advances through conjecture and refutation, which requires maximum freedom of criticism) and the Kuhnian model (science advances through normal science within paradigms, which requires institutional stability and deference to expertise). The former needs free speech; the latter is threatened by it. Extend to a bold conclusion: if science requires protection from free speech, it has ceased to be Popperian and become dogmatic — and the correct response is not to restrict speech but to reform scientific institutions.
Key evidence to deploy: Popper’s The Logic of Scientific Discovery (falsificationism requires that any claim can be challenged); Kuhn’s The Structure of Scientific Revolutions (normal science vs. paradigm shifts); the replication crisis (Ioannidis, “Why Most Published Research Findings Are False,” cited 12,000+ times); Galileo’s trial as the founding myth of science-vs-authority; Mill’s On Liberty Chapter 2 (suppressing a true opinion robs humanity of the truth; suppressing a false one robs the truth of the clearer perception produced by its collision with error); the Lysenko affair (Soviet suppression of genetics as a cautionary case); the lab leak hypothesis controversy (suppressed as “conspiracy theory” in 2020, now taken seriously); Feyerabend’s Against Method (science has no single method and should not claim special epistemic authority); Prof. Kealey’s own work on the relationship between state funding and scientific freedom.
What they’re really asking: This is a question about human purpose, resource allocation, and whether aspirational ventures are justified when immediate suffering remains unaddressed. The judges want more than a cost-benefit analysis of NASA’s budget. They want you to engage with what space exploration means — for human identity, for the long-term survival of the species, and for the allocation of moral attention. The deeper question: can an activity be both an indulgence and a necessity simultaneously?
Obvious angle (avoid): “Space exploration produces technological spin-offs that benefit everyday life” (the Tang and Teflon argument — empirically questionable and philosophically vacuous). Also avoid: “Space is a waste of money when people are starving” (morally simplistic; by this logic, all non-subsistence activity is an indulgence).
Winning angle: Argue that the necessity/indulgence binary is a false dichotomy that reveals a deeper philosophical question about which time horizon defines necessity. On a 10-year horizon, space exploration is an indulgence — the money could address immediate suffering. On a 1,000-year horizon, space exploration is an existential necessity — a single-planet civilization is vulnerable to extinction-level events. On a 100,000-year horizon, space exploration is the only activity that matters — everything else is ultimately futile if humanity does not become multi-planetary. The winning thesis: the question “necessity or indulgence?” is really the question “which generation’s needs matter most?” — and the answer depends on your discount rate for future welfare, which connects directly to the deepest problems in moral philosophy (Parfit’s Reasons and Persons, the non-identity problem, and whether we can have obligations to people who do not yet exist). Extend to: this is the same question that governs climate policy, AI safety, and nuclear risk — the tension between present consumption and species-level survival.
Key evidence to deploy: Parfit’s Reasons and Persons (obligations to future generations); the existential risk framework (Bostrom, Ord’s The Precipice); the Chicxulub impact as evidence that extinction events are real and recurrent; the Overview Effect (psychological transformation reported by astronauts); Arendt’s The Human Condition (her concern that the space age marked humanity’s attempt to escape the earthly condition); Carl Sagan’s “Pale Blue Dot” as philosophical rhetoric; specific data (NASA’s budget as a percentage of the US federal budget: 0.48% in 2024 vs. 4.41% in 1966); SpaceX’s cost reduction (Falcon 9 vs. Space Shuttle per kg to orbit); the Moon Treaty and Outer Space Treaty as frameworks for governing space; the “fix Earth first” argument and its structural similarity to the argument against life insurance (“spend on the living, not hypothetical future deaths”).
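The time-horizon argument above can be made concrete with a toy discounting calculation (a sketch, not from the source; the welfare unit and rates are illustrative). Under exponential discounting, the present value of a fixed unit of welfare enjoyed 1,000 years from now collapses to nearly zero at any positive discount rate, while a zero rate weighs future generations equally with our own. This is the arithmetic behind the claim that the "necessity or indulgence?" question is really a question about discount rates.

```python
def present_value(future_value: float, annual_discount_rate: float, years: int) -> float:
    """Exponentially discounted present value of welfare received `years` from now."""
    return future_value / (1 + annual_discount_rate) ** years

# One unit of welfare enjoyed 1,000 years from now, under three hypothetical rates:
# at 0% it counts fully; at 0.1% it retains roughly a third of its value;
# at 3% (a typical economic discount rate) it is effectively worthless today.
for rate in (0.0, 0.001, 0.03):
    pv = present_value(1.0, rate, 1_000)
    print(f"discount rate {rate:.1%}: present value = {pv:.2e}")
```

The design point: an essayist defending space exploration as an existential necessity is, in effect, arguing for a discount rate at or near zero, which is exactly the position Parfit's arguments about future generations are used to support.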
What they’re really asking: This is the most philosophically rich prompt in the entire 2026 competition. Beneath the playful surface is a question about the nature of moral consideration, the philosophy of mind, and what politeness is. The judges want you to investigate: Is politeness purely functional (a social lubricant between beings with feelings), or does it reflect something about the character of the person being polite? If politeness matters because of what it does to the speaker rather than the listener, does it matter whether the listener is conscious? This is a question about virtue ethics applied to the age of AI.
Obvious angle (avoid): “No, ChatGPT doesn’t have feelings, so politeness is meaningless.” This is the majority answer and is philosophically shallow. Also avoid: “Yes, because AI might become sentient someday” — this is speculative and sidesteps the current philosophical question.
Winning angle: Argue that the question “should we be polite to ChatGPT?” is actually two questions that are usually conflated: (1) Does ChatGPT deserve politeness? (a question about moral status) and (2) Should we practice politeness toward ChatGPT? (a question about human character). The answer to (1) is almost certainly no — ChatGPT has no experiences, no welfare, no interests that politeness could serve. But the answer to (2) is almost certainly yes, and this is far more interesting. Drawing on Aristotle’s virtue ethics: virtues are habits — character traits developed through repeated practice. A person who habitually treats any interlocutor (human, animal, machine) with contempt is training themselves in contempt. A person who practices courtesy even toward a language model is cultivating the disposition of courtesy, which will manifest in their interactions with beings who can actually suffer. The winning thesis: we should be polite to ChatGPT not because it matters to ChatGPT but because it matters to us — because politeness is a practice of the self, not a service to the other. Extend to: this has profound implications for how we design AI interfaces, how we raise children in an age of ubiquitous AI interaction, and what it reveals about the Kantian principle that we have duties regarding (not to) non-rational entities.
Key evidence to deploy: Aristotle’s Nicomachean Ethics (virtue as habit — hexis); Kant’s duties regarding animals (we should not be cruel to animals not for their sake but because cruelty coarsens our character); the “Media Equation” research (Reeves & Nass, 1996 — humans instinctively treat computers as social actors); Searle’s Chinese Room argument (syntax is not semantics, processing is not understanding); Nagel’s “What Is It Like to Be a Bat?” (the problem of consciousness); Dennett’s functionalism as a counterpoint; the ELIZA effect (people attributing understanding to simple programs — Weizenbaum, 1966); research on children’s interactions with Alexa/Siri and the concern about learned rudeness; Floridi’s concept of “artificial companions” and the ethics of interaction; the Buddhist concept of right speech (speech as practice of mind, independent of the listener’s nature).
No historical winners exist for Science & Technology. This is a new category for 2026. The closest analogue in the existing competition is philosophy, which similarly requires engagement with abstract questions about knowledge, consciousness, and values. Cross-category analysis shows philosophy essays use progressive structure, academic sources, and definitional openings — patterns likely to transfer to Science & Technology.
The 2026 S&T prompts are notable for their philosophical sophistication. Q1 (free speech and science) is essentially philosophy of science; Q2 (space exploration) is applied moral philosophy; Q3 (politeness to ChatGPT) is philosophy of mind meets virtue ethics. This suggests the judges see this category as applied philosophy rather than as a science category with its own distinct methodology.
Key competitive insight: this category will likely attract students with strong STEM backgrounds who may lack philosophical training. A student who combines genuine scientific literacy with philosophical depth — who can discuss both the architecture of large language models and Searle’s Chinese Room — will have a decisive advantage.
This is philosophy of science, not science. The number one mistake students will make in this new category is writing about science rather than writing about what science means. Every answer must engage a philosophical framework — epistemology, ethics, philosophy of mind, or political philosophy.
Q3 is the sleeper prompt. “Should we be polite to ChatGPT?” sounds like a joke question. It is not. It touches on consciousness, moral status, virtue ethics, philosophy of language, and the future of human-AI interaction. The essays that treat it seriously will have an enormous advantage over those that treat it as lightweight.
Challenge scientific authority — carefully. The competition’s intellectual tradition (named after John Locke, chaired by a scientist who argues against government funding of science) rewards independent thinking. An essay that critically examines the authority of science — while maintaining deep respect for the scientific method — will resonate with the judges far more than one that simply defers to expert consensus.
Quantitative data differentiates. Specific numbers — research funding figures, replication failure rates, space exploration costs, AI benchmark scores — signal that you understand the empirical landscape. Cross-category analysis shows this evidence type is dramatically underused.
New category = opportunity to define the standard. The first Science & Technology winners will establish what excellence looks like. Be philosophically ambitious. The judges are looking for essays that justify this category’s existence.
Engage with the philosophy of mind for Q3. Many students will write about AI ethics in general terms. The winning essay will engage with the hardest question: what is consciousness, and does it matter for moral consideration? Read Nagel, Searle, Dennett, and Chalmers before attempting this prompt.
Data confidence: Low-Moderate | Science & Technology is a new 2026 category with no prior winners, no historical essays, and no category-specific patterns. Strategic guidance derived from cross-category analysis (12 essays across 7 categories), web intelligence on judging standards, and structural inference from established categories (especially philosophy). Prof. Kealey’s published views on science policy provide some insight into judging sensibilities. All recommendations should be treated as informed hypotheses.