Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "stop letting the ai use race and sex as factors in its calculations and this wil…" (ytc_UgwhZAk8x…)
- "Ai is not supposed to be used for military purposes, that's great but how about …" (ytc_Ugw6pBFqR…)
- "We are so screwed. We can't even get people on board with fighting climate chang…" (ytc_Ugz4Crykg…)
- "I think the future will see people/companies paying effectively subscription fee…" (ytc_UgzpNCvuv…)
- "Too many are lost on the real matter at hand, even this engineer. The moment one…" (ytc_UgwEZDBG1…)
- "The best thing about art is being able to say that you made it, even if it is re…" (ytc_UgxBAfT_e…)
- "No reason to be scares. You can always pull the plug and live in nature.…" (ytc_UgweeZtI3…)
- "No matter how intelligent are human there are stupid / send the robot where human…" (ytc_Ugx_n0Y3F…)
Comment
> The worst danger is that AI would be smart enough to lull people into complacency and to be given too much trust, then it would turn around and occasionally do unexpected, completely stupid things. Basically, a program "bug ", but much more difficult to anticipate, debug or avoid with thorough testing, than with conventional programmimg.
>
> AI is not so dangerous to the degree that it is relegated to a pure computing and advisory role, not given "arms and legs" to physically impact the environment. You would not want to trust it with mission-critical functions such as deciding when to launch missiles.
Source: youtube · AI Responsibility · 2024-07-03T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgywaZxZTXKnNvrcKeF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx0rw4n58jJFtT7TZR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwpRJrxDazs-y0P1Sp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyNNRyLrIiuHG7Liax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxT0wZ7593zk5UqybJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzoJtaRJeWkyReoWIl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw3AJ_yXIK7-79kMeh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugys_s7wKRTMhJcxAYp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzvZhBawyhhTulkBo94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzr1QL5ttXHh6x9F_V4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
```
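A raw response in this shape can be parsed, validated, and indexed by comment ID for lookup. A minimal sketch, assuming the five coding dimensions shown above (`responsibility`, `reasoning`, `policy`, `emotion`, plus `id`); the `parse_codings` helper and the in-memory index are illustrative, not part of the actual pipeline, and the two records below are copied from the response above:

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
# Two example records, taken verbatim from the response shown above.
raw_response = """
[
  {"id": "ytc_UgywaZxZTXKnNvrcKeF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzvZhBawyhhTulkBo94AaABAg", "responsibility": "company",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"}
]
"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse the model output and check that every object carries
    exactly the expected coding dimensions, then index by comment ID."""
    codings = json.loads(text)
    for obj in codings:
        missing = EXPECTED_KEYS - obj.keys()
        if missing:
            raise ValueError(f"{obj.get('id', '?')}: missing keys {missing}")
    # Index by comment ID for O(1) lookup, mirroring the page's lookup feature.
    return {obj["id"]: obj for obj in codings}

index = parse_codings(raw_response)
coding = index["ytc_UgywaZxZTXKnNvrcKeF4AaABAg"]
print(coding["emotion"])  # fear
```

Validating the keys before indexing catches the common failure mode where the model drops or renames a dimension, so a malformed batch fails loudly instead of silently producing partial codings.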