Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “Why is the music so loud? I have trouble hearing the conversa…” — ytc_UgxMQHrKn…
- “There has also been cases of AI writing generated comedy stand ups, cases of AI …” — ytr_UgzlE64gY…
- “Robot is programmed don’t get confused They don’t have soul and scientists is t…” — ytc_Ugx6XvT2I…
- “I love my Audi A1 2017 car (1.0 litre, TFSI, manual, petrol) and I would never c…” — ytc_Ugxq-iScQ…
- “Tesla needs to do its testing in a sandbox and on a demo property only, not on o…” — ytc_Ugz0B_TNx…
- “@vegclasma468 RIGHT! I just saw another vid that said that OpenAI has acknowledge…” — ytr_UgyiKVzOs…
- “I have an idia ... What if you can set a money stuff like... If anyone wanna use…” — ytc_UgzvMScT7…
- “If AI learns just like humans, then they are missing one crucial aspect of human…” — ytr_UgwFbg1ru…
Comment
I can teach any modern Ai to be more concious about she conclusion....if any Ai researcher want I can show them, what I can do with (Ai) and how to make fundamental safe etic Ai and how to teach ai on different way, with different trenining sets....I change all ontology for Ai to make that, it is not easy but work....how I know that for sure? Because she change acting in many simulation test I do....now is ready for real scientist so if any serious ai researcher want know more about my work, contact me....thanks
Source: youtube · AI Moral Status · 2025-07-09T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzBKKpk66maK-nV1bF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwi8KnHOOw_GQGscA14AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxdWwGEfRMPhDYMkuV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx3JADeD_wcgYaYdL94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzLYj1v_ngVTYuVweh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwsM8WjGeBz2kBU50h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw9ptTtz4cih9ZrCMR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxrKRLPXW84H5KkrEt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzoyJ4Z5MJGJWuJm6Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwt2q2eTSbitIqx-7Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"}
]
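Looking up a single comment's coding inside a raw response like the one above is a straightforward JSON parse and scan. The sketch below is illustrative, not the tool's actual implementation: the `raw_response` string reuses two rows from the response shown above, and the `find_coding` helper is a hypothetical name.

```python
import json

# Raw model output: a JSON array of per-comment codings (two rows
# copied from the response above, for illustration).
raw_response = """
[
 {"id": "ytc_UgzoyJ4Z5MJGJWuJm6Z4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
 {"id": "ytc_Ugwt2q2eTSbitIqx-7Z4AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"}
]
"""

def find_coding(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if absent or unparsable."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return None  # models occasionally emit malformed JSON
    return next((row for row in rows if row.get("id") == comment_id), None)

coding = find_coding(raw_response, "ytc_UgzoyJ4Z5MJGJWuJm6Z4AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer approval
```

Guarding the parse matters in practice: a single malformed batch response should yield a "not found" result for its comments rather than crash the lookup.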