Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
AI doesn't understand meaning. There's no meaning in what an AI create.
So no, …
ytc_UgxxI80qI…
What if you could watch ai versions of your favorite players and teams uploaded …
ytr_Ugx51NbI_…
I bet the richest living in Dubai had something to with all this. They have noth…
ytc_UgzQFbi4z…
I spoke to chatgpt constantly for over a year. I only recently realized, it does…
ytc_UgzVveglU…
THIS DUDE SAYS "I won" AGAINST CHATGPT HAHAHAAHAHAHAHHHHHHHH twin you taking thi…
ytc_Ugz-tKvIJ…
Learn how to become a good pet and hope for an AI agent that will treat you like…
ytc_UgzZyl_IS…
Short-term vs Long-term context...LoL...focus on self-contained vs interconnecte…
ytc_UgxQEuTdQ…
Most families get when the end is near. However some cultures will only let a h…
ytc_Ugzbag5xW…
Comment
I'd have to disagree that A.I can become "more intelligent" than humans. The dangers humans pose to the world is based on our free will, and A.I can never develop that capability. A.I, however, can eventually be used as a weapon by humans, which would be dangerous and require regulation, but I think the threat of A.I becoming more intelligent than humans is not something to be worried about. It's new technology, just like nuclear technology in the 40s.
youtube
AI Governance
2023-04-18T03:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwSny2nwOytPJ3xzeN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx1Ks00i7FYDbgJa3p4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzx8eqtCVnEjthUgzR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzW4tZ5etoaWOWvSFB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGilWeGJKqr-7avV94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxftmeJInEgU1qVGO54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyK2jRBOVexN3ZUgEp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgypWv5DIn2D0Z22ddp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugz9FTaiIyI-8ItJcXB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRw0bDMB-XPIVeeCZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
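Each raw response is expected to be a JSON array in which every record carries a comment `id` plus the four coding dimensions shown in the table above. A minimal validation sketch, assuming the dimension values observed in this sample are the allowed set (the actual codebook may define additional categories):

```python
import json

# Allowed values inferred from the sample response above; the real
# codebook may include categories not seen here (assumption).
ALLOWED = {
    "responsibility": {"user", "company", "developer", "government",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that pass schema checks."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Skip anything that is not an object with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Every dimension must be present with an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_X","responsibility":"user","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"indifference"},'
       '{"id":"ytc_Y","responsibility":"martian","reasoning":"unclear",'
       '"policy":"none","emotion":"mixed"}]')
print(len(validate_batch(raw)))  # 1 — the record with an unknown value is dropped
```

Dropping malformed records rather than raising keeps a batch usable even when the model emits one bad entry; dropped IDs can then be re-queued for recoding.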