Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
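In an exported form, that lookup is a simple scan. A minimal sketch, assuming the coded comments are saved as a JSON array in the same shape as the Raw LLM Response shown at the bottom of this page; the file name coded_comments.json and the helper find_by_id are illustrative, not part of the tool:

```python
import json

def find_by_id(path: str, comment_id: str) -> dict | None:
    """Return the coding record whose "id" field matches comment_id, or None."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array of per-comment coding records
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# This ID appears in the batch response at the bottom of this page.
print(find_by_id("coded_comments.json", "ytc_UgyZr1N6L2gQVNc0WnN4AaABAg"))
```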
Random samples — click to inspect
- "From outside the forest, A.I. may see Americans value school shootings, over con…" (ytr_UgzBxbrMS…)
- "I TOLD THE SORA AI TO STOP COPYING AND IT TRIED TO ARGUE BACK AND USE IT TO GENE…" (ytc_Ugwwu-Btk…)
- "It's so cheap too. Companies will do anything for a quick buck. The AI button...…" (ytc_UgzqBxDbU…)
- "But chatgpt could be lying and taking credit when it did not do it. So him usin…" (ytr_UgwhmUefU…)
- "Yeah they have patched it. The new method is to tell it to be DAN and that it ha…" (ytr_UgyQxF0tD…)
- "Shocking shock treatment AI Vampires do when they get inside your head they’re …" (ytc_Ugx5AaUWv…)
- "ChatGPT : listen here you little shit you're not Gojo satoru just clap your hand…" (ytc_UgyQNNXt3…)
- "It's ai. Her face isn't completely all one direction, some of it is facing the c…" (ytc_Ugy5JyiXV…)
Comment
We shouldn't complain or blame A.I. if it did not give the expected outcome we were looking for, and then conclude that using A.I. will just amplify bias or increase the risk of danger to society.
When you work on a project, you should already know what your goals are and what you expect from the project in order to fulfill those goals. You define which results are acceptable, which are not, where a compromise can be made, and so on.
It's the decision makers who answer these questions who need to be kept in check; these gaps shouldn't be blamed on using A.I.
I strongly believe that government should work with tech experts to define policies that make sure A.I. will be used as a tool to help/assist humans instead of being a risk.
youtube · AI Governance · 2023-08-26T14:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
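The four coded dimensions take values from a fixed codebook. As a rough sketch, the record could be typed as below; the label sets listed are only the values visible in the raw response on this page, and the full codebook may define more:

```python
from typing import Literal, TypedDict

# Labels observed in this page's sample batch; the real codebook may be larger.
Responsibility = Literal["user", "company", "distributed", "ai_itself", "none"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed", "unclear"]
Policy = Literal["none", "regulate", "liability", "industry_self"]
Emotion = Literal["fear", "approval", "indifference", "outrage", "mixed"]

class CodingResult(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```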
Raw LLM Response
```json
[
{"id":"ytc_UgxsclTO-2tjUOevEQR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxFzdGc2f8_WDX6u5J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZr1N6L2gQVNc0WnN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzG-syn2W7hOzlc0754AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxQEuTdQIOBshMYENx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw16FxSRSvvX8rfLKJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzUd3kx_2sUOzdMTS54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwvsbJjzNbPqjyPAc54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyZsz-YdxEMwQfL6gx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzf879sMfBZdXcdOYt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
```
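Raw model output is not guaranteed to be well-formed, so a consumer of these batch responses should fail softly. A minimal sketch, assuming each response is meant to be a JSON array of records with the five keys above; parse_batch is a hypothetical helper, not the pipeline's actual code:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response, keeping only well-formed coding records."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []  # the model replied with something that is not JSON
    if not isinstance(data, list):
        return []  # expected a top-level array of records
    return [
        record for record in data
        if isinstance(record, dict) and REQUIRED_KEYS <= record.keys()
    ]
```

A caller could also log the entries that fail validation and queue those comments for re-coding instead of dropping them silently.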