Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- When one finds out today's AI, wrote presidential speeches 30 years go... "Tapes… (ytc_Ugx3O7OQ_…)
- Can't wait for Asmon to react to this, take the discussion out of context, and v… (ytc_UgwLPJC_w…)
- Mathematicians are not calculators and they never was , but programmers write co… (ytc_Ugx72SkZj…)
- chatgpt is both one of the most amazing and most infuriating technilogical tools… (ytc_UgwPu77ZO…)
- AI is so smart, but most of them don’t know what day it is. ask one… (ytc_Ugwzxr3Xu…)
- ChatGPTs answer of 2025: Given current research trends, funding, and technical c… (ytc_Ugx3ONJdo…)
- am i the only one who is more interested in the AI behind the character they're … (ytc_UgwG5OnrO…)
- Chatgpt and other llms don't reason. They extrapolate, they use stochastic metho… (ytc_UgwezBEJ2…)
Comment
Well, let's go for the worst case scenario, where AI takes over the world...then what? It may be the strongest, smartest thing on the planet, to do what? Once it will take over and perhaps eliminate all humans, what will it keep it going? What goals will it have? Or it will shut itself down, after some time, due to no goals found:)
Humans have desires, goals, pleasures, all these, made us work better, learn more, live longer, etc
Without fear, without greed, without desires, how and why should a entity want to keep on going? Feelings cannot be learned.
The only option remaining would be a hybrid human, I guess.
Source: youtube · Topic: AI Governance · Posted: 2025-10-01T12:2…
Coding Result
| Field | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
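For anyone who wants to handle these coded records programmatically, here is a minimal sketch of how a single record could be represented, inferred from the Coding Result table above and the raw response below. The class name, field names, and allowed category values are assumptions drawn only from what is visible on this page, not the project's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Category values observed on this page; the full codebook may allow more.
RESPONSIBILITY = {"ai_itself", "government", "company", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the Coding Result table above (illustrative)."""
    comment_id: str          # e.g. "ytc_Ugy9EhOyfeSSJ7ERWIh4AaABAg"
    responsibility: str      # who the commenter holds responsible
    reasoning: str           # style of moral reasoning
    policy: str              # policy stance expressed
    emotion: str             # dominant emotion
    coded_at: Optional[str] = None  # ISO-8601 timestamp of the coding run

    def __post_init__(self) -> None:
        # Light validation against the values seen on this page.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected category value: {value!r}")
```

Under this sketch, the row in the raw response below whose values match the table above, `ytc_Ugy9EhOyfeSSJ7ERWIh4AaABAg`, would load as `CodedComment("ytc_Ugy9EhOyfeSSJ7ERWIh4AaABAg", "ai_itself", "consequentialist", "unclear", "fear", "2026-04-26T23:09:12.988011")`.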
Raw LLM Response
[
{"id":"ytc_UgyioraT0Gum_cSgpPJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy5uvBHGd20PdH6oVR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzC8rnovFtW3jjZnmd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyMgB_AKhK5Yeq6tB14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzi1ppnwe6X4yEjptd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzPTzws_pX5vlYdP1x4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwZjQlV6f_S0-ZSGxt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjkpNcCwPbD3OjOcx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy9EhOyfeSSJ7ERWIh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugymqe4tYCyCSGfK-w94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
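The raw response is a JSON array with one object per coded comment, which makes the "Look up by comment ID" feature at the top of this page straightforward to reproduce offline. The sketch below, with illustrative function and variable names that are not part of any existing tool, parses such a response and indexes the rows by ID.

```python
import json
from typing import Dict

def index_raw_response(raw: str) -> Dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded comments)
    and index the rows by comment ID for quick lookup."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

# Illustrative usage with one row copied from the response above.
raw_response = '''[
  {"id": "ytc_UgzPTzws_pX5vlYdP1x4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

by_id = index_raw_response(raw_response)
print(by_id["ytc_UgzPTzws_pX5vlYdP1x4AaABAg"]["emotion"])  # prints "fear"
```

Indexing by ID also makes it easy to spot duplicate or missing IDs when reconciling a raw response against the batch of comments that was sent for coding.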