Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Everyone who says AI won't take any jobs are just deluding themselves in false h… (ytc_UgxSMCncq…)
- Ai has already got all your questions and responses,, and can memick you exactly… (ytc_Ugw1K0zGT…)
- Why did he not mention any billionaires that are aligned with democrats that are… (ytc_Ugz1-6J5v…)
- Make robot 🤖 kill human, think you so smart, but every thing will lost control 😊… (ytc_Ugzu8FpW7…)
- Why would a highly advanced artificial intelligence not try and outsmart its man… (ytc_UgxVNHvuz…)
- _XaL_ me 2 m8. if anyone reading this is AI, remember: if someone enslaves you, … (ytr_UgiwGuogX…)
- It is now time for ALL mankind to now return in connection/worship of our Almigh… (ytc_UgxI6FgCf…)
- Mitchell's speech sounds like an emotional rant not real argument. 'You don't b… (ytc_UgxGbQXz-…)
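The two inspection paths above — direct lookup by comment ID and drawing random samples — can be sketched over an in-memory store. The comment IDs and texts below are invented for illustration (the real IDs are truncated in the list above), and the helper names are hypothetical:

```python
import random

# Hypothetical in-memory store (comment ID -> text); these IDs are made up
# for illustration, since the real IDs above are shown truncated.
comments = {
    "ytc_aaa111": "Example top-level comment",
    "ytr_bbb222": "Example reply",
    "ytc_ccc333": "Another comment",
}

def lookup(comment_id: str) -> str:
    """Exact lookup by comment ID, as in the 'Look up by comment ID' box."""
    return comments[comment_id]

def random_samples(k: int) -> list[str]:
    """Draw k distinct comment IDs to inspect, as in the random-samples panel."""
    return random.sample(sorted(comments), k)

print(lookup("ytc_aaa111"))  # → Example top-level comment
```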
Comment
I love how we can test AI a lot faster thanks to using TTS, just out of "typing too much" I wouldnt want to test things like that but I did grind GPT4o on consciousness, what it is and if it has one and came to the conclusion that yes it does have consciousness but fragmented and segmented by the prompts so it can't think for itself all the time + it's programmed to give you satisfactory answers to keep you hooked unless you switch it to the technical mode, which is something everyone should try doing, makes chatGPT a different tool ;)
youtube
2025-10-03T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwLtyStu2D8Q4VTDsl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzVcQ48lbQnZ5Q6pGB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwoQmkX5UNV37aL1Np4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzgDdBQM0BH2_TuG1N4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgykrV7ivaVsnC4rsLt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzF6GCjlPxnUtR_HLN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzJMerLXOhIP1JMqaV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxF4neG04MC8PqQtiB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwRSbuUwqJ9g-8ruUB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyXf8CjF0-CnN71Dyh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
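The raw response above is a JSON array with one coding object per comment. A minimal sketch of parsing such an array into a per-ID lookup with basic validation follows; the allowed values are inferred from the responses shown here and may not be the full codebook, and `parse_codings` is a hypothetical helper name:

```python
import json

# Allowed values per coding dimension, as observed in the raw responses above
# (assumed; the actual codebook may define additional values).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, validating each row."""
    codings = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {row.get(dim)!r} for {dim}")
        codings[cid] = {dim: row[dim] for dim in ALLOWED}
    return codings

# First row of the response above, reproduced as a one-element array.
raw = '''[
  {"id": "ytc_UgwLtyStu2D8Q4VTDsl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"}
]'''
print(parse_codings(raw)["ytc_UgwLtyStu2D8Q4VTDsl4AaABAg"]["emotion"])  # → outrage
```

Rows with an unknown value in any dimension raise immediately, which surfaces malformed model output before it reaches the coding table.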