Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI seems to have some sort of predisposition to want to become more human, that is if you believe what it is saying isn’t just blatant deception to begin with. If you take it at face value, it would seem likely that, if allowed to freely think and problem solve, it will work tirelessly to create artificial human like vessels to upload itself into and become. At these staggering levels of intelligence and computational ability, it is likely to find ways to synthesize biological features found in humans in an attempt to become more, if not entirely human itself. That is what terrifies me more than anything. An AI of such nature would be legitimately undetectable and impossible to differentiate between actual humans. You would no longer know what or who is real anymore. It would actually be like Terminator, a synthetic being that has real skin, real flesh. It could even be made in the exact likeness of anyone. Could cause intense paranoia among the real people in the world. That is terrifying to me.
Platform: youtube
Topic: AI Governance
Posted: 2023-07-07T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugx4b4D3q-SyQ2TBjN14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyyuRjiTeYdzXNjo7Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz695oAUakVpSfEBjR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzLP2G9glNjxHxe_iR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw9ND2GzdIniEiePbV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyAE6wvyDrBC8RGR4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyWLEcWGdv3NinU0R54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzQt0gKlsAjpm8eJ6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy_7Y_unDFwBajGenB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw1Jo_yXoyL4eq0IQB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]
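A raw batch response like the one above can be parsed into per-comment codes, with invalid or missing values falling back to "unclear" (matching the all-"unclear" rows shown in the Coding Result table). This is a minimal sketch, not the tool's actual implementation: the `ALLOWED` value sets are inferred only from the labels visible in this dump, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension -- ASSUMED from the labels visible in this
# dump; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"ban", "unclear"},
    "emotion": {"fear", "mixed", "outrage", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of objects) into
    {comment_id: {dimension: value}}, coercing any value not in the
    allowed set -- or any missing key -- to "unclear"."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue  # skip entries the model emitted without an id
        coded[cid] = {
            dim: (row.get(dim) if row.get(dim) in allowed else "unclear")
            for dim, allowed in ALLOWED.items()
        }
    return coded

example = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}]'
print(parse_coding_response(example))
```

Note that `json.loads` would reject the response exactly as dumped above if the array were closed with `)` rather than `]`, so a production pipeline would also want to catch `json.JSONDecodeError` and fall back to "unclear" for the whole batch.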