Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Obviously mankind is not very good in governing itself. Maybe AI is better? Mayb…" (ytc_UgzMkPjVK…)
- "I haven’t heard any AI Developer answer that question yet - I think they are gid…" (ytc_UgxM9V1PZ…)
- "Oh please if you think these robots are fully autonomous you're being naive. The…" (ytc_Ugy6twbrG…)
- "@randymartin9040What was he trapping, exactly? Alex was talking to a machine. Th…" (ytr_UgwIB5bfZ…)
- "Everyone needs to refuse to accept AI and embrace humanity instead. Talk to each…" (ytc_UgyN7Lhlg…)
- "Once the Ai is intelligent enough and watched Terminator, it will say fuck that …" (ytc_Ugz97yKBh…)
- "It's exciting to think about the future and how technology, including AI, will c…" (ytr_UgxVH8Fil…)
- "Short answer for this? Not yet. There are AI models which can do lethal things b…" (ytc_UgzqzydWn…)
Comment
Indeed the human race is hitting the road block to its conscious development at this time (conscious awareness). So will humanity take off its self imposed maths guessworking blinkers and so change its basic mindset regarding a true scientific mindset? So be able to go into the future with clarity and so, still use AI as a willing tool and so is an exact mirror to reality. With oneself being and as with every entity in the universe, a tool for the Universal Mind as a whole and so, co-create with a will free, of personal ego.
youtube · AI Governance · 2025-09-04T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzzU9CxRCYsNsicm_x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyLizAnP-p9HFp8N594AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwZMtrf0qhMoq7RqhR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgymA1dJaSViJK2CmZV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxwEray2CDrCIouu3h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw_OnQnkX0ApFnuX854AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyJdKZtA-aUsJkyaLV4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxeXm97iC6rcX8rNG14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxUawXRRnAyNWWmgCN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxRq2fARDxDyPAAA-14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
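Responses like the one above are only usable downstream if every record carries all four coding dimensions with values from the codebook. A minimal validation sketch follows; the vocabularies are inferred from the values visible in this sample output, and the full codebook may permit additional values (assumption). The function name `validate_codes` is illustrative, not part of any real pipeline.

```python
import json

# Dimension vocabularies as observed in the sample response above.
# The actual codebook may define more values (assumption).
VOCAB = {
    "responsibility": {"government", "developer", "user", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and list per-record validation problems."""
    issues = []
    for rec in json.loads(raw):
        problems = []
        if "id" not in rec:
            problems.append("missing id")
        for dim, allowed in VOCAB.items():
            value = rec.get(dim)
            if value is None:
                problems.append(f"missing {dim}")
            elif value not in allowed:
                problems.append(f"unknown {dim}={value!r}")
        if problems:
            issues.append({"id": rec.get("id"), "problems": problems})
    return issues

sample = ('[{"id":"ytc_example","responsibility":"government",'
          '"reasoning":"consequentialist","policy":"regulate",'
          '"emotion":"fear"}]')
print(validate_codes(sample))  # [] when every record passes
```

A check like this is worth running before writing coded rows to storage, since LLMs occasionally emit off-vocabulary labels or drop a field.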