Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgzQILnIg…: "Even if an AI were to develop conscious reference frames, it wouldn’t fear termi…"
- ytc_UgyjhSc-1…: "These Ai agents are capable of thinking 🤔 evil 😈 thoughts and execution into act…"
- ytc_UgzjFZyBT…: "Stop developing AI. Stop all its development before the entire world is destroye…"
- ytc_Ugx-dOVY2…: "Seeing the number of 4 digit suffix's on almost every user ID, id say ai just wa…"
- ytr_Ugzo6Qxv1…: "@The_Almighty_Piece_Of_Bread my rage bait was a success also u messed up a part,…"
- ytc_Ugx3CzsFr…: "WHOSE TO SAY THIS ROBOT ALREADY**** knows it should of been giving a face. It s…"
- ytc_UgzQZaTHV…: "imagine this robot, put his gun in you head and say \" im alive !\" god save us…"
- ytc_UgwQJ9uHh…: "The fear discussed in the video regarding massive job destruction by Artificial …"
Comment
Excellent interview - I like that, the interviewee didn’t over complicate his responses , otherwise it’s easy to switch off. AI - Not great long term , but we all knew this years ago , so now why the alarm/frustration knowing fully well when creating AI the work force will be reduced & people will be f-cked . As he said all well & good to regulate / policies but hard to police + other countries will be doing as they please . Scams, loss of jobs , political fraud, deep fake the list goes . The cons far out way the pros , so not impressed at all .
A monster has been created & no going back 😵💫.
youtube · AI Governance · 2025-07-16T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugxe3i3-I84L1v6WkTt4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "unclear"},
  {"id": "ytc_UgyNe47VIeo0z32NiuJ4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugw-5BCPmTL8DWyYNYp4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_Ugz8He1xAsHAo7lJgxp4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgyK13VTVE6tRJwxfJ54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgyE5wX3cFfw6_X6IL94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "resignation"},
  {"id": "ytc_UgyZlF8dluKheFFNkeh4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyeJLaFMWyhNas_XaR4AaABAg", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",       "emotion": "unclear"},
  {"id": "ytc_Ugzz8WYjxBOPN_6YBIZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgxWXmn27NBJ1sAWi5p4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"}
]
```
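A raw response like the one above can be parsed into a per-comment lookup with a few lines of standard-library Python. This is a minimal sketch, not the pipeline's actual code: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above, but the helper name `index_codings` and the validation rule (skip records missing any dimension) are assumptions for illustration. The sample records are taken verbatim from the raw response.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id": "ytc_Ugxe3i3-I84L1v6WkTt4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyE5wX3cFfw6_X6IL94AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a raw model response and index codings by comment ID.

    Hypothetical helper: records missing an id or any dimension are skipped,
    which guards against partially formed model output.
    """
    index = {}
    for rec in json.loads(raw_json):
        if "id" in rec and all(d in rec for d in DIMENSIONS):
            index[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return index

codings = index_codings(raw)
print(codings["ytc_UgyE5wX3cFfw6_X6IL94AaABAg"]["policy"])  # regulate
```

The second record reproduces the Coding Result table above (distributed / consequentialist / regulate / resignation), so a lookup by its comment ID returns exactly those values.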