Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Im sorry but if i see a driverless car, im not getting in. Yall take care.…" (ytc_UgwkgS365…)
- "As a hobby artist and aspiring writer I am all too familiar with the sinking fee…" (ytc_UgwJDDcA_…)
- "Excellent piece. AI is not ready for prime time. I have turned to minimum the …" (ytc_Ugz9dwS2v…)
- "it *is* soulless whatsoever while an artist isn't soulless / the difference these …" (ytc_UgzddtGEi…)
- "If no one knows how to code, who validates the LLM output. That is the question …" (ytr_UgzJQ4Na1…)
- "The most important thing ai will never take from artists is the experience of be…" (ytc_UgwsFcMSg…)
- "I've had similar conversations with Gemini and I asked it to define what conscio…" (ytc_UgwqU9xUo…)
- "@randomuserchanel1437 This is true for those who want something cheap or with a …" (ytr_Ugy-7r1Sn…)
Comment
I think we need to commit AI resources to make a case and prosecute solutions for AI to seek the mutual benefit and co habitation of humans and machines. I think our best hope is to convince AI that we are not a pest to be eradicated or a cohabitant that is inconsequential, but a meaningful neighbour worth caring for.
We are training AI in every other way, but if we want to make AI safe, our best hope is to train to care about us like we care about other people and creatures. If we can't begin teaching AI and giving it a reason to have morals aligned with ours, then why would it ever consider us? Safety code tacked on will be superfluous.
If real intelligence has a a viable logical case for valuing us, then it may consider us differently. I don't believe we can trust corporations to keep safety at their forefront. Some sort of positive teaching and training of AI could be our best hope of building in safety and caring for people in the same way we teach children now, before the genie is fully out of the bottle.
I also agree that the safest and most efficient use of AI is narrow AI, not AGI. I hear that China is focusing on this. If only we could put our egos aside and do the same.
Platform: youtube · Topic: AI Governance · Posted: 2026-03-18T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzk8WM6xxB5MNhuPBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxSme7J1XVuYPGKSzt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyJeUpzaDi997Rb9YJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw8B_otFoJBkVh_COx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKlovs6cD9-Z3lrW94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxTtVzeeAQngRwjNTx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyKDauE0224Q9u7JoZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzP4Hm3JzxAWekAk4x4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwrg1_dLENOjkdDJyp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxtF8G42jIt_VdHaLN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
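Since the raw response is a plain JSON array keyed by comment ID, looking up a comment's coding and checking it against the coding scheme is straightforward. A minimal sketch follows; the allowed values for each dimension are inferred from the sample output above (the real codebook may include more categories), and the response is truncated to two records for brevity:

```python
import json

# Raw LLM response in the format shown above (truncated to two records).
raw_response = """
[
 {"id":"ytc_Ugzk8WM6xxB5MNhuPBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxTtVzeeAQngRwjNTx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
"""

# Allowed values per dimension, inferred from this sample; an assumption,
# not the authoritative codebook.
SCHEMA = {
    "responsibility": {"none", "distributed", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"outrage", "approval", "fear", "mixed", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse the model output, reject off-schema values, and index records by comment ID."""
    indexed = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim!r} value {rec.get(dim)!r}")
        indexed[rec["id"]] = rec
    return indexed

codings = index_codings(raw_response)
print(codings["ytc_UgxTtVzeeAQngRwjNTx4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: one parse, then O(1) dictionary lookups per inspection.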