Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
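The storage layout behind the lookup isn't shown on this page; as a minimal sketch, assuming the coded comments are exported to a JSON Lines file with one record per comment (the file name, field names, and helper are hypothetical), a lookup by comment ID could work like this:

```python
import json

def lookup_by_comment_id(path: str, comment_id: str) -> dict | None:
    """Scan a JSON Lines export and return the record whose 'id' matches.

    Each line is assumed to hold one coded comment, e.g.
    {"id": "ytc_...", "responsibility": "...", "reasoning": "...",
     "policy": "...", "emotion": "...", "raw_response": "..."}.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Hypothetical usage: inspect the raw model output for one coded comment.
match = lookup_by_comment_id("coded_comments.jsonl", "ytc_UgzKoLv-PzAm-LhV8ap4AaABAg")
if match is not None:
    print(match.get("raw_response"))
```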
Random samples — click to inspect
- "I call it, Gary. It is waaay past the singularity. Imo, that was years ago. Reme…" (ytc_UgxYT-1O8…)
- "Hello Charlie, ChatGPT opens an entirely new universe for me. Thank you for you…" (ytc_UgxqIKz9v…)
- "*P.S.A.:* _If you don't have/want to pay for a ChatGPT subscription, sign up for…" (ytc_UgzWb_bI1…)
- "Sadhguru really means Ai conquer and control humans because it's more intelligen…" (ytc_UgxOsnY_m…)
- "Is there a point where AI can trick people into entering a matrix and the person…" (ytc_UgxgKKXfO…)
- "AI will either just give us the 2008 stock market crash again, or skynet. The c…" (ytc_UgzmnPLy-…)
- "@nimrodery If you draw a tree you are copying. Even if you add changes? Same arg…" (ytr_Ugw_4fMhi…)
- "I agree, AI does compete with the users for its own gains. AI is learning from p…" (ytc_Ugxe3uvBn…)
Comment
If AI becomes smarter than humans, isn't it possible that it could decide that what's best for the world is cooperation and peace among nations. Couldn't it determine that wars and weapons are contrary to those goals and work to eliminate them. Couldn't it teach and foster humankind to desire, above all else, cooperation and peace with each other and among nations. Wealth invested in weaponry and war could be diverted toward more beneficial ends. Obviously, humans are underdeveloped and judging from history is, overall, not very skillful in running this ship called earth. On the other hand, if it decides we're hopeless or even useless, I think we're done.
youtube · AI Governance · 2025-06-22T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
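The four coding dimensions map naturally onto a small typed record. The sketch below is not the project's actual schema, only an illustration built from the values visible on this page; the real codebook may define additional labels.

```python
from typing import Literal, TypedDict

class CodingResult(TypedDict):
    """One coded comment, mirroring the four dimensions in the table above.

    The Literal label sets are limited to values that appear in this section.
    """
    id: str
    responsibility: Literal["ai_itself", "developer", "company", "user", "none"]
    reasoning: Literal["consequentialist", "deontological", "virtue", "unclear"]
    policy: Literal["none", "regulate", "liability", "ban"]
    emotion: Literal["approval", "indifference", "fear", "outrage"]
```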
Raw LLM Response
[
{"id":"ytc_UgzKoLv-PzAm-LhV8ap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxp_U1q07iztPHcr6p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwWH4ietbUL3-tPdr94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx_HSTyv6MB8755cot4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-NJ61zfFBcEpRWhV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCDEWaCDp0nwGnJHd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxzl-hiOiJlUG7zbk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyzkNhlU9uhlBJ95xd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyngWy6jd1UnwCXstx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzuPrOorFSI5DwYgRZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
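Because the model returns a whole batch of codings as one JSON array, the per-comment view above has to parse that text and index it by comment ID. A hedged sketch of that step, assuming `raw_response` holds the array shown above (the function name and error handling are illustrative, not the project's actual code):

```python
import json

def parse_batch_response(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of codings) into a dict keyed by comment ID."""
    try:
        items = json.loads(raw_response)
    except json.JSONDecodeError:
        # Real model output is not always valid JSON; the caller decides whether to retry or log.
        return {}
    expected = {"id", "responsibility", "reasoning", "policy", "emotion"}
    return {
        item["id"]: item
        for item in items
        if isinstance(item, dict) and expected <= item.keys()
    }

# Hypothetical usage against the array shown above:
# codings = parse_batch_response(raw_response)
# print(codings["ytc_UgwCDEWaCDp0nwGnJHd4AaABAg"]["emotion"])  # -> "approval"
```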