Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "There are VERY few things I think AI should be allowed to do: 1. Medical shit l…" (ytc_Ugzk7qmXZ…)
- "How about we not make AI too advanced in the first place so that we don't have w…" (ytc_UgxdCiXaI…)
- "You just know that super intelligence means to these people doing the things I d…" (ytc_UgzRSNWIb…)
- "AI will allow the ruler class to be less dependent on the common folk. Eventual…" (ytc_UgwjBXv0C…)
- "Any time I watch a video like this, I send it to all my servers and people I kno…" (ytc_Ugw559k8b…)
- "Everyone concerned about a super A.I that is going to turn into HAL or Skynet is…" (ytc_UgyOVgE6W…)
- "its so gross that the ai you "interview" refers to humans as "us" and "we," but …" (ytc_UgxdEMwU3…)
- "He doesn't actually know much about AI, I bet. I literally work in an AI company…" (rdc_m293zbq)
Comment
I am basically a hermit. I would never use AI for having a conversation/s. Ever! Hell if given a choice between any AI chat and calling/paying a psychic, I’d call the psychic. And I believe ZERO of that nonsense!
How could any of the AI chats be remotely taken seriously? You are communicating to something that can NOT relate to the human condition. It has info but no real life experiences, not to mention any emotions! You might as well be talking to the wall as far as feeling any emotional connection. I don’t get it. If anyone out there does “get it”, could you maybe explain it to me. I’m not being judgmental, I don’t think? I certainly could be wrong though.
youtube · AI Governance · 2023-05-15T19:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwvkCt_D4712LRMilN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugza9r98NYnuXpVoyUV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxLE6mvc-QKNv5BLCF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyxWFtogKu_HnoybfV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwmxDx1pKGT_IcKNr14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFj1mfjlF2LJyX3_J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHiWyz8Uoo74u2oOd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwQ4UUdJNNzzTLVeYd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxk7X3MQZ1rldGcr5h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzrYRPn9idXIX9DbPN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]
```
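The batch response above is a JSON array with one record per comment ID, each carrying the four coded dimensions. Below is a minimal sketch of how such a response could be checked before ingestion. The allowed values are only those observed in this batch (the real codebook may be larger), and `validate_batch` is a hypothetical helper, not part of the tool shown here:

```python
import json

# Allowed values per dimension, inferred from the batch response above.
# The actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and flag out-of-codebook values."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append({"id": rec.get("id"), "dimension": dim, "value": value})
    return problems
```

Running this over a response returns an empty list when every record stays inside the codebook, and a list of `{id, dimension, value}` entries for anything the model invented, which is a common failure mode when an LLM is asked to emit fixed categorical labels.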