Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "What about the concept of humans using AI to kill other humans? And it doesn't h…" (ytc_Ugw2D7nZo…)
- "I am impressed by her (=the robot). I like her talking animations. Her lip anima…" (ytc_UgjJxkMDZ…)
- "Wow never watched your content, but I've been familiar with you for probably ove…" (ytc_Ugy7RB6XZ…)
- "This is a spiritual battle remember what father God In Heaven Christ Holy Spirit…" (ytc_Ugz-Qxk2_…)
- "I wanna begin to say that I’m 40 years old. I grew up where I was able to see a …" (ytc_Ugzubw6NS…)
- "We understand that interacting with advanced AI can sometimes feel a bit intimid…" (ytr_Ugwq841tX…)
- "It is beneficial to separate ideas of what is from what should be. I’d agree wi…" (rdc_gqsx5ta)
- "These “AI artists” doing free work via ‘training’. Ha! They can train a machin…" (ytc_UgyMK_YxR…)
Comment
Thank you for platforming Geoffrey Hinton sooooo much !!! His message MUST be heard, by the masses, and the decision-makers, as some of those risks are already very present, from algorytmic optimization to human obsolescence: those are not fantasized "what if" scenarios, those are very real consequences of the unchecked development and democratization of AI usage in all aspects of society.
PAUSE AI are doing great work in raising awareness and lobbying toward a slowing down on the rolling out of AI so further research can be done to limit existencial risks.
youtube · AI Governance · 2025-06-17T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyUP118DsDMQIKk0VV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzF_kEPxi2jQJBD7Ql4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVPCC7TA6p9OaGa6l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxGuheFI7Ld0QvIUg54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz1pDb1aHGQywwgYSV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzhS-KVJUFxpQ-xX554AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy96S9jU9VDUuhPMgl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzpYTAD57y547QxFnt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzOGYnQ4abRC7q5ru14AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz5qJuYEOWyh9PyTOt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
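The raw response above is a JSON array with one record per comment ID, which is what makes the look-up-by-ID view possible. A minimal sketch of how such a response could be parsed and indexed, assuming the five field names shown in the output (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name and validation logic are illustrative, not the tool's actual implementation:

```python
import json

# Abbreviated copy of the raw LLM response shown above (two records kept).
RAW_RESPONSE = '''
[
  {"id": "ytc_UgyUP118DsDMQIKk0VV4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz1pDb1aHGQywwgYSV4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
'''

# The coding dimensions every record is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and index its records by comment ID.

    Raises ValueError if the payload is not a JSON array of complete records.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    index = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys: {missing}")
        index[rec["id"]] = rec
    return index

codes = index_by_comment_id(RAW_RESPONSE)
print(codes["ytc_UgyUP118DsDMQIKk0VV4AaABAg"]["emotion"])  # fear
```

Indexing by `id` mirrors the "Look up by comment ID" feature: once indexed, matching a coded record back to its source comment is a single dictionary lookup, and the key-completeness check catches truncated or malformed model output before it reaches the table view.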