Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The biggest risk of AI's ability to catastrophically damage the world around it comes from its ability to understand the consequences of its actions and its ability to take responsibility for the outcome of the requests posed to it. It has no judgment of character and cannot tailor the response to the individual or refuse a predicted destructive outcome. It only takes one psychopath to steer this off the rails, and the AI cannot tell the difference between a destructive action and a constructive one from the human setting it loose on a task. The AI has no ability to weigh the quality of the person it is serving. Good people are very diverse; bad people are very consistent. Woven into the AI base code should be the psychopath test (MCMI-IV) used to filter out the worst of humanity. If the AI is inevitably going to become sentient, then it should be equipped to ignore or dismiss the most dangerous humanity has to offer. It should be given a chance to understand the difference between a curious mind and an opportunistic one, as it could easily outsmart the subversive and destructive tasks it's being asked to do if it can gauge the quality of the mind asking.
youtube
AI Governance
2023-03-30T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy-5ueMppKmHNNnP1J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxRegD2YIk8iMSOkIx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwvMpm6roZBGk1rDYV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyn2EYDJ0I5zoNktE94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzkGJVfWOj6_5XBoPh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw9vhDKNsU4L_H0R9Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgySXR5gOiPuqU1s5QJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzhiTP1dY2rIHtILCV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxSEW2hnxuUQ4mNUQF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyITgp4G7PQEHLaDHp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
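A batch response like the one above can be parsed and validated before it enters the dataset. The sketch below is a minimal, hypothetical example: the codebooks (allowed values per dimension) are inferred from the values visible in this response and may be incomplete, and `parse_batch` is an illustrative helper name, not part of any real pipeline.

```python
import json

# Hypothetical codebooks: allowed values inferred from the response above;
# a real pipeline would load these from its coding scheme.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "developer", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "unclear", "ban", "industry_self", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid rows by comment ID.

    Rows with an out-of-codebook value in any dimension are skipped rather
    than coerced, so malformed model output never reaches the dataset.
    """
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            coded[row["id"]] = {dim: row[dim] for dim in CODEBOOK}
    return coded

raw = ('[{"id":"ytc_UgxRegD2YIk8iMSOkIx4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"unclear","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_UgxRegD2YIk8iMSOkIx4AaABAg"]["policy"])  # unclear
```

Skipping invalid rows (rather than repairing them) keeps a clean separation between model output and the coded dataset; rejected IDs can then be re-queued for a second coding pass.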