Raw LLM Responses
Inspect the exact model output behind any coded comment. Look a comment up directly by its ID (a minimal lookup sketch follows the sample list below), or start from one of the random samples.
- "Before we prove that AI is conscious we need to find out what percentage of huma…" (ytc_UgzPuhn74…)
- "What most of these videos about the coming AI and job loss do not talk enough ab…" (ytc_UgzCp7uke…)
- "I hate the use of the term \"plagiarism machine\". Shows a complete lack of unders…" (rdc_jj3qytz)
- "100% agree. Even telling me that something has \"AI\" crammed into it at this poi…" (ytc_Ugxz8skbu…)
- "A friend of mine had a very similar experience to Emu at FanX Salt Lake this yea…" (ytc_UgwCJUpEI…)
- "3 year old me's drawing of my brother will always be better than the most breath…" (ytc_UgypwYhZ4…)
- "Llms are a model of the human brain without the biological peripheral processes.…" (ytc_UgxLO-djy…)
- "All he had to do was show them the ai messages to his friends so they know it no…" (ytc_Ugz2SGead…)
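As a rough illustration of the lookup path, the sketch below scans a store of raw batch responses for a given comment ID. The file name `raw_responses.jsonl` and the storage layout (one JSON array of per-comment codings per line, matching the raw response shown further down) are assumptions for the sketch, not the tool's actual storage format.

```python
import json
from pathlib import Path

# Assumed layout: one raw LLM batch response per line, each line a JSON
# array of per-comment codings (see "Raw LLM Response" below).
STORE = Path("raw_responses.jsonl")  # hypothetical file name

def lookup_coding(comment_id: str) -> dict | None:
    """Return the coded dimensions for `comment_id`, or None if absent."""
    with STORE.open(encoding="utf-8") as fh:
        for line in fh:
            for record in json.loads(line):
                if record.get("id") == comment_id:
                    return record
    return None

# An ID taken from the raw response shown below:
print(lookup_coding("ytc_Ugz4lRo7XL3Qm18j3u54AaABAg"))
```

A linear scan is fine for inspection-scale data; a production tool would more likely index responses by ID in a database.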
Comment

> Yes, but AI can only use information that already exists, it cannot create anything new or original on its own, its literally like driving the fastest car in the universe but all it can do is exactly what a car does except its really fast.

> This was very disappointing, I was hoping he might go into any length of detail how AI might destroy humanity, but once again, it's nothing but vague predictions that could be taken from any sci fi tv show from the 90s.

youtube · AI Governance · 2025-06-24T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | unclear |

Coded at: 2026-04-27T06:24:59.937377
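For reference, the dimension values visible across this page suggest a codebook roughly like the following. This is inferred from the visible samples only; the actual codebook presumably defines more values, with precise coding instructions for each.

```python
# Allowed values per coding dimension, inferred from the samples on this
# page (assumption: the real codebook may define additional values).
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "developer", "company",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "disappointment", "fear", "approval",
                "resignation", "outrage", "unclear"},
}

def is_valid(record: dict) -> bool:
    """Check that every coded dimension uses a known value."""
    return all(record.get(dim) in values for dim, values in CODEBOOK.items())
```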
Raw LLM Response
```json
[
  {"id":"ytc_UgwyKipZaEVQtHGLFIV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz4lRo7XL3Qm18j3u54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"disappointment"},
  {"id":"ytc_UgwnXAQXgwvA1BNXrIh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz_MizgZw3TQFr1OdV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQdrvsjc6UZZ8cE5d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwojUvFYLBN0iFfHIR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwtpkOAeRKMPd3d1-14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxCGphu_dtcF9dtC4x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugya_hZk_VXVBvwtv7B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz8XDbOVgiHzN8X4fx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
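The coding result shown above is one row of a batch like this. A minimal sketch of turning a raw batch response into per-comment rows, stamped with the time of coding as in the table above, might look like the following; the function and field names other than `coded_at` are assumptions about the pipeline, not its actual code.

```python
import json
from datetime import datetime, timezone

def parse_batch(raw_response: str) -> dict[str, dict]:
    """Index a raw LLM batch response (a JSON array of codings) by
    comment ID, stamping each row with the time it was coded."""
    coded_at = datetime.now(timezone.utc).isoformat()
    rows = {}
    for record in json.loads(raw_response):
        comment_id = record.pop("id")
        rows[comment_id] = {**record, "coded_at": coded_at}
    return rows
```

Given the response above, `parse_batch(raw)["ytc_UgxCGphu_dtcF9dtC4x4AaABAg"]` would yield the company / consequentialist / regulate / fear row.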