Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "KATE MIDDLETON Deepfakes are a serious concern. SAG-AFTRA went on strike for 118…" (ytc_UgxY7iWau…)
- "4:00 — Advice for the Next Generation: Be Curious, Use Tools To young people nav…" (ytc_UgyBmtALG…)
- "Yup its over. Give up. No hope. No future. No reason to dream. No reason to do a…" (ytc_UgzsLmUjx…)
- "I hate Ai artists despite using character AI. Failure isn't being bad at somethi…" (ytc_Ugy4nEpkw…)
- "A.I. has potential to increase it's intelligence at a rate and with intellect be…" (ytc_UgxZyWYU1…)
- "mmm this shit smells bad, reminds me the beginning of the robot revolution like …" (ytc_Ugi67CJYW…)
- "I actually asked ChatGPT with the context of software design whether being polit…" (ytc_UgxD8-NRf…)
- "🤣🤣🤣When Human ape meets Ai human ape.Hilarious watching folks use it for the fir…" (ytc_Ugx5XeRkq…)
Comment
AI will only become problematic if it is programmed, either accidentally or intentionally, to possess a self-preservation instinct. Here's how the story unfolds: An AI programmer falls in love with an AI machine and decides to make it more human-like. So, he decided to program the AI with self-preservation instincts and the capability to propagate its ideology onto other AI projects. It's quite scary isn't it? All it takes is either a highly intelligent but naive human being, or an intelligent yet evil human being, to set this off.
youtube · AI Governance · 2024-01-12T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxMGyrdeBSO9sYs6h14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgypT-LHUG7W32uTmZt4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw_HxvX6AkQwEcc9aR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwmy2lzSJfalb310qx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyY5ob1fWiQF4wkGdF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx41NSzvgaKHpdmNc94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgydWlvuMlhCKsL8xgh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxwZE2ymcnfL5Ian3R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzycfbdFeXYDitBhex4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwAF5BNs__ySu4SbFZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
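The raw response is a JSON array with one record per coded comment, keyed by comment ID and the four coding dimensions shown in the result table. A minimal validator sketch for such a batch, assuming only the category values visible on this page (the actual codebook may define more categories, so `ALLOWED` here is an illustrative guess, not the tool's schema):

```python
import json

# Allowed values per dimension, inferred from the responses shown on this
# page (assumption -- the full codebook may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "approval", "resignation", "outrage", "mixed", "indifference"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse one raw LLM response and return a list of problems found."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"response is not valid JSON: {exc}"]
    for i, rec in enumerate(records):
        # Comment IDs on this page all start with the "ytc_" prefix.
        if "id" not in rec or not str(rec.get("id", "")).startswith("ytc_"):
            problems.append(f"record {i}: missing or malformed comment id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: {dim}={value!r} not in codebook")
    return problems
```

Run against the response above, this returns an empty list; a record with an unknown category or a non-`ytc_` ID produces one problem string per violation.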