Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- ytc_UgzTxAt3Z…: "And what exactly is the regular person is that the person that doesn’t have any …"
- ytc_Ugz2M0a-v…: "not betting against AI, but AI isn't that great right now. Once I used AI create…"
- ytr_UgxgkWBxZ…: "LOL, sorry but you have NO IDEA what AI is. LLMs do not care about \"if or else\"…"
- ytc_UgxtuX9KN…: "It's infuriating to listen to people talk out of one side of their mouth about t…"
- ytc_UgznycHdF…: "Hot take: It’s not ChatGPT that makes devs worse, it’s how people use it. I fini…"
- ytc_UgwGBOXkM…: "In a distant future, humanity has harnessed the power of quantum computing and n…"
- ytr_UgybADskM…: "The economics run on profit and loss. AI is giving huge profits to big companies…"
- ytc_UgxrHMhun…: "I from my birth to death forever have been against artificial intelligence and I…"
Comment
Take the 'invention' of nuclear fusion, discovered in the 1920's. If the driving force behind it's use is 'money and power' the result is a nuclear weapon, however if the driving force was the 'betterment of humanity' the result could be free power for everyone. Which way was that technology used...weapons and power. Same with AI, will it be used for good or bad......who decides what's good or bad though, and what is meant by good or bad ? Good for humanity/bad for profit or good for profit/bad for humanity. Unless we infuse the system with and for the WELLNESS AND BETTERMENT OF HUMANITY it will fail and these huge data centres will become the dead pyramids of the future.
youtube · AI Governance · 2025-09-05T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwBHoCc-ZmAi0WrlYt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw45vUD8bnPPqzsB-p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzhRqlr7rkzkKtcWuZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwJqQu5OT0phwQc-_x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwc3lrC63EynRm_G814AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwl7JPJUxMN8QMLUR94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyh5e_MBiSQrIADwzF4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyCEsxJjwL1FVR_yEJ4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzvMKAl8NAyTiQvw954AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxs83NwUbXaukgZN-14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
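A minimal sketch of how lookup by comment ID over a response like this could work, assuming only the shape shown above (a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys). The `index_by_id` helper is illustrative, not part of the tool, and the two rows are copied from the raw response above.

```python
import json

# Two rows copied from the raw LLM response above; a real response has
# one object per comment in the coded batch.
raw_response = """
[
 {"id":"ytc_UgwBHoCc-ZmAi0WrlYt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwl7JPJUxMN8QMLUR94AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
"""

# The four coding dimensions plus the comment ID, as shown in the
# Coding Result table.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict[str, dict]:
    """Parse the raw response and key each coded row by its comment ID,
    skipping malformed rows (missing keys) instead of failing the batch."""
    rows = json.loads(raw)
    return {
        row["id"]: row
        for row in rows
        if isinstance(row, dict) and EXPECTED_KEYS <= row.keys()
    }

codes = index_by_id(raw_response)
row = codes["ytc_Ugwl7JPJUxMN8QMLUR94AaABAg"]
print(row["policy"], row["emotion"])  # regulate fear
```

Skipping malformed rows keeps one bad model output from discarding the whole batch; the skipped IDs can then be re-queried separately.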