Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- This is one area (along with lawyers) where I think the public as a whole can be… (rdc_jkpwgkp)
- AIs can't think. Once you understand this you realise a lot of this speculation … (ytc_UgzrlCAAR…)
- We live in a world where Elon Musk in our Tony Stark, our Ultron will be ChatGPT… (ytc_Ugy_Daysg…)
- @thedevilsadvocate788 Jesus fella. Getting bent out of shape cuz a guy doesn't l… (ytr_Ugxwfaoee…)
- Thank you for not using a 'what if' statement in this. When I engaged with Chat… (ytc_UgxzT75pr…)
- Generative AI isn't even real sentient AI, it's a literally unthinking language … (ytc_UgzcpI8GF…)
- here's a reminder for you all: AI isn't an ''intelligence'', it's a bunch data w… (ytc_UgxoFQHkq…)
- It is hard to believe that any AI can be more dangerous and atrocious than human… (ytc_UgyCATN09…)
Comment
What a nonsense. There is no danger.
What's the plan here? To lure Chat GPT into role playing an evil persona - and then to magically hook it up to, say, nukes and have the evil persona fire them?
News flash: there's a shorter way. Just fire the nukes.
Any REAL danger comes from the real physical entities that the AI has access to, not from the AI itself.
And in terms of destroying ourselves we seem not to need a roleplaying AI's help, at all...
youtube · AI Moral Status · 2023-03-07T05:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy_errvOlhn1w8TuLB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIaR92gc_VsGGGdlh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyzJcV0uSShqVwzjRF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyXxO9lgVJZ1x8IyY54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5tJb2_-wNc5rp_6F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx7zeztct6slbarWLp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLYZb7R8AnlZtScDN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyEftL-ZYThFMvjGdN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugypv2Rp1lWgytGiJgJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZsDbWf61LwA75sLJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"}
]
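Responses in the format above can be parsed and sanity-checked before the per-comment coding results are stored. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed-value sets are inferred only from the sample records shown here, and the full codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# ASSUMPTION: the real codebook may permit more categories than these.
ALLOWED = {
    "responsibility": {"none", "distributed", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "resignation", "approval"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept when it has an "id" and every coding dimension
    holds a value from the known allowed set.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop records without a comment ID
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical single-record response for illustration:
raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"fear"}]'
print(parse_coding_response(raw))
```

Records with an unknown category value are silently dropped here; a production coder would more likely log them for re-prompting rather than discard them.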