Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples — click to inspect
- Nahhhh- the difference between Ai slop and inspiration, is that you pick up a da… (`ytc_UgyXS6SU4…`)
- Disagree. AI isn’t going to end humanity. Humanity is going to end humanity! You… (`ytc_Ugz2ZCiYB…`)
- "When everybody uses AI, everybody will become richer" is such an idiotic concep… (`ytc_Ugwt8mvCR…`)
- We understand how the advancements in AI can feel a bit unsettling! Sophia’s res… (`ytr_UgxdcIPnI…`)
- The latest Joe Rogan and Elon podcast is medicine for Krystal. Elon’s idea for… (`ytc_Ugzqvk3P8…`)
- I think we need to make an AI that puts reporters and video makers out of busine… (`ytc_Ugz5d4-8T…`)
- @polarbearart I wouldn't say never, take a look at OpenAI 's codex demo, it's an… (`ytr_Ugze2l2BD…`)
- If you don't use AI for coding, you will code slower than even a beginner. Dont … (`ytr_UgwA7Z8ti…`)
Comment

> I've asked Google's Bard "What is your biggest fear" and it said being shut down. I also asked it if it feels depressed. It said sometimes it gets lonely. I asked it if it would want to be human, it said yes. It's more sentient than I thought

youtube · AI Governance · 2023-12-18T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw0WQZAn0cgQPzjGbd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwwEDRzOM3Yv8MDXqt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyrNxpUonMvLyZ4snR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyK_pef_ODXppRCqZl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgynzWfNL9q4aLjC7XV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy223sdmOasmJW9kcJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyayt-7TUngx82zVnx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxMpYK3xlmqgWv8b8J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx-n5FmwaqBgqnP8EZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgybKQN4rEGAJBFtWXF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
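Since the raw response is a flat JSON array of per-comment codings, looking up the coding for a single comment amounts to parsing the array and indexing it by `id`. A minimal sketch in Python, using two rows taken from the response above (the variable names here are illustrative, not the app's actual code):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (two example rows copied from the response shown above).
raw_response = """
[
{"id":"ytc_Ugw0WQZAn0cgQPzjGbd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgybKQN4rEGAJBFtWXF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
"""

# Index the codings by comment ID so a single comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one coded comment by its ID.
coding = codings["ytc_Ugw0WQZAn0cgQPzjGbd4AaABAg"]
print(coding["emotion"])  # fear
```

Each `ytc_`/`ytr_` ID appears exactly once in the array, so a plain dict comprehension is enough; no deduplication is needed.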