Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- `ytc_UgzlqQQMs…`: "AI has only a purpose, but never a meaning. It tries everything to give it one,…"
- `ytc_UgyudGQrh…`: "The truth is that Chat GPT's primary goal as a Large Language Model Algorithm is…"
- `ytr_Ugz9e9lPO…`: "@MASKEDB”we’re not like the others, that aren’t getting paid” right, because no …"
- `ytc_Ugwt9dWfe…`: "The lovers of generative AI don't understand that being creative for most people…"
- `ytc_UgznzWmF8…`: "13:21 I think they mean accessible sometimes as in..for people who can't draw. T…"
- `ytc_UgxZJzLMR…`: "I'm here because Task Us is hiring for AI account, and I've been interested in A…"
- `ytc_UgxO_Viru…`: "see, ai never argues with anyone. but with me, i told them something that they h…"
- `ytc_Ugxb7vBAK…`: "Keep in mind what Klown Schwab (then of the WEF) said this about the technology …"
Comment

> Anyone that believes ai is dangerous doesn’t believe in unalienable truths. Anyone that believes in truths should have no fear of ai being dangerous. By the nature of humanity, we all advance towards the good, even if we take one step back, we seem to always move forward. I don’t believe human nature leads to our total collapse. The same goes for ai, which is based on our nature, but given the wisdom of our entire corpus of knowledge.
>
> Have no fear, unless of course you’re a doomer and have no faith.
>
> A computer is no different than a human embryo and the ai running on that computer can be flavoured anyway we humans want, this includes human ai’s wants as well.
>
> Projecting out, the ai will extract the evil ai and keep the good ones based on unalienable truths with human nature

Source: youtube · AI Governance · 2025-01-16T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx5mhqb_lSeZWtQve54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzb2qOXs9QeyJVXFuB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzci4azcKKznmKT4Pt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyFada6LeqqUOef58R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyMqi1HzvbCRAUHhZF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzhPjRIQCy-Y-ojdDN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoNUAH6G4MvLVfH9V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwdY12QxCZKeylSuYV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVmoWLn37HGGM2Cxh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzGDG9HkSk1yB6j5I94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
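Since the model returns each batch as a JSON array of records keyed by comment ID, the lookup-by-ID view is a simple parse-and-index over that array. A minimal sketch, assuming nothing about the tool's internals: the `index_by_id` helper is hypothetical, while the dimension names and the two example records are taken verbatim from the response above.

```python
import json

# Raw model output, truncated here to two records from the batch above.
raw_response = """
[
  {"id": "ytc_UgzhPjRIQCy-Y-ojdDN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzGDG9HkSk1yB6j5I94AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse a batch response and index each record's codes by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: {dim: rec[dim] for dim in DIMENSIONS} for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_UgzGDG9HkSk1yB6j5I94AaABAg"]["reasoning"])  # virtue
```

The indexed form mirrors the "Coding Result" table: the entry for `ytc_UgzGDG9HkSk1yB6j5I94AaABAg` carries the same none/virtue/none/approval values shown above.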