Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- That's the thing. It will take you 6 months to do what an AI did in a few hours … (`ytr_Ugx4Iy6Gs…`)
- 2:30 "Stop being lazy". Oh, the irony. Also "Work harder to make your work more … (`ytc_Ugw4_uEGx…`)
- Now I understand why heaven is a government with one king, one God over… (`ytc_UgzDbp-zv…`)
- They are killing themselves to escape the horrors and inhumanity of intense capi… (`rdc_gsopy3v`)
- Tons of brilliant AI hackers out there and the so called built in safeguards can… (`ytc_Ugz32eXsg…`)
- I appreciate your analogy of using a car to win a footrace and calling yourself … (`ytc_UgxY8f8hs…`)
- I just paused this video to go on an hour long side venture because i had no ide… (`ytc_UgzLZZFpM…`)
- When comparing the prices, I think there should be a third category - art commis… (`ytc_UgxVsdD8v…`)
Comment
Fascinating discussion with Neil deGrasse Tyson on the nuances of AI and its impact on humanity! It's refreshing to hear a balanced view, especially at 15:47, where Neil explains why he isn't worried about AGI's impact. I'm curious, given the exponential growth of technology, how do you both envision the role of ethical considerations in AI development? Would love to hear more on this in future episodes!
youtube · AI Moral Status · 2025-08-23T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyDJaIo057RMWQXADZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyoFYvqrkRkcx0MAVx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxkMtg4QK-0QfNLMN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzOp1o9FtdXN0j7jMN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwuwC3P9FSTALHJZ3B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwNODDMh05sH0NdE8p4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzJ9XBWpNx_-HuivPd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyH0UTBHmPTEsp_cyJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwFMw7p1LwsrVyYomx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwMNf21re86QQhr5BF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
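The raw response is a JSON array of per-comment codes across four dimensions. A minimal validation sketch, assuming the category vocabularies are limited to the values observed in this sample (the real codebook may well define more values; `OBSERVED_VALUES` is an assumption, not the tool's schema):

```python
import json

# Category values observed in the sample response above; the actual
# coding vocabulary may include values not seen here (assumption).
OBSERVED_VALUES = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM response and flag rows with unexpected values."""
    problems = []
    for row in json.loads(raw):
        for dim, allowed in OBSERVED_VALUES.items():
            if row.get(dim) not in allowed:
                problems.append(
                    {"id": row.get("id"), "dimension": dim, "value": row.get(dim)}
                )
    return problems

sample = (
    '[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)
print(validate_codes(sample))  # → []
```

Rows that fail this check can be routed back for re-coding rather than silently stored, which is why the inspector surfaces the exact model output per comment ID.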