Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
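For programmatic access, the same lookup could be done directly against a local export of the coded comments. The sketch below is illustrative only: it assumes a hypothetical `coded_comments.jsonl` file in which each line is a JSON object with an `id` field plus the coded dimensions; the file name and fields are assumptions, not the tool's actual storage format.

```python
import json

def lookup_coded_comment(comment_id: str, path: str = "coded_comments.jsonl"):
    """Return the stored record for one comment ID, or None if not found.

    Assumes a hypothetical JSONL store where every line is an object
    carrying an "id" field and the coded dimensions.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch the Reddit comment listed in the random samples below.
record = lookup_coded_comment("rdc_n3m55w8")
if record is not None:
    print(record.get("responsibility"), record.get("emotion"))
```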
Random samples (click to inspect):

- "It can help, and the key word is help. Sadly, we live in a world where medicine …" (ytc_UgzWtZPHs…)
- "I don't understand this issue on the level that you obviously do, so I'm not sur…" (ytc_UgyN1xHqi…)
- "Using polite language appears to be the secret sauce in enticing the model to br…" (ytc_UgzYjCwRz…)
- "Which tells me they haven't done any work themselves for ages now, the AI is bas…" (ytr_UgzJQ_6XM…)
- "Or is it that AI is now showing that a college degree is worth nothing the same …" (ytc_Ugxsl47bB…)
- "Thanks for the feedback! Sophia's insights on wisdom and the balance between AI …" (ytr_UgwV4Mw2f…)
- "So, now one of my planned and dreamed jobs are in top 3 to be replaced (engenirr…" (ytc_UgyW3msqt…)
- "I'm a lot less worried about AI models' "intent to misbehave" than I am users wi…" (rdc_n3m55w8)
Comment
All this AI talk has got me thinking of the TV series Person of Interest again.
To create The Machine or Samaritan.
One to save us or control us.
What happens when it glitches and doesn't take accountability for its actions? Would it tell us that we are wrong and then become Skynet? Would it then enslave us as batteries?
This comes down to ethics. Just because you can do something doesn't mean you should do it.
youtube · AI Governance · 2024-01-30T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
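The value sets behind these dimensions are not spelled out on this page; the sketch below simply collects the values visible in the examples shown here (the full codebook may include more categories) and checks one coded record against them.

```python
# Value sets observed in the examples on this page; the actual codebook
# used by the study may define additional categories.
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if it passes)."""
    problems = []
    for dimension, allowed in CODEBOOK.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension}={value!r}")
    return problems

# The coding result shown above passes with no problems.
print(validate({"responsibility": "ai_itself", "reasoning": "deontological",
                "policy": "unclear", "emotion": "fear"}))  # -> []
```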
Raw LLM Response
```json
[
{"id":"ytc_Ugxt24K7wrZ6VTC9VT94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzRsOO_HboUkgGXnpx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0jyGQ5nArHq61CyV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzCBjB7BxlyOtpq8N54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxlFRKTyx9E8XuIVGF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzBrxku7icoduZqAh54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgydJ6UjoO6N18aJust4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugycza7bCmNvIuCZBOl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyFlZ2DUj8h5hznWuR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgweCcz5BPxx0i7R8I94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
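Because comments are coded in batches, the raw response is a JSON array keyed by comment ID. A minimal sketch of how such a batch could be parsed and joined back to individual comments follows; the variable names are illustrative, and the example row is the one whose values match the Coding Result table above.

```python
import json

raw_response = """[
  {"id": "ytc_UgydJ6UjoO6N18aJust4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
]"""  # in practice, the full array returned by the model

# Index the batch by comment ID so each coded row can be matched back to
# the comment it describes.
rows = {row["id"]: row for row in json.loads(raw_response)}

coded = rows.get("ytc_UgydJ6UjoO6N18aJust4AaABAg")
if coded is not None:
    print(coded["responsibility"], coded["emotion"])  # ai_itself fear
```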