Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Okay so this isn't a racist AI.
This is the AI learning from racist humans.
Do…
ytc_Ugz28I-jH…
Ai is already better strategically than humans. Why hasnt it taken ceo jobs? Oh,…
ytc_Ugzy0bIly…
I don't think it's all actually open
From what I've read most of the research w…
rdc_jkf05of
i saw another video and it was very disturbing and dystopian. the people who liv…
ytc_Ugx996Orm…
Meanwhile my issue with AI art is they are plaguing my Google search whenever I …
ytr_Ugx4cK_Ua…
I can promise you even a human would not be able to have seen the cat, so keep m…
ytr_UgyC-my-2…
If you have to ask AI about Jesus you do not know Him. John 14:9 (KJV), Jesus te…
ytc_UgxfNxDYB…
Wait ...ai is racist ? Am sure this is false ... I'm sure there were people of…
ytc_UgzMh2JqO…
Comment
Today's AIs are nothing but gigantic algorithms that devour "big data". They can now choose among "execution possibilities". That gives us the impression that they are capable of reasoning, that they can make choices, and leads us to dream that they could have emotions. In reality these are not choices. It is not free will! It is merely the remarkable feat of very advanced algorithms. AIs cannot be self-aware. They cannot understand reason and emotion. They will never have a will of their own. These are faculties of our mind, faculties of our soul that only God can create. Humans will never be able to give a machine free will and a sense of self as powerful as our own.
youtube · Cross-Cultural · 2023-07-26T05:0… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw7QipwyaOizvLN_214AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyvMq8z6WxZF9_Hn7p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfmplxLtdryfQoF6Z4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzrrw3d9Z-VpCkoOUh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwN0ZR2gw9ui5I8AFl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx0lrgfUbxTiDcnvzB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz4BJQyoaqafsVGcgJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxa5AYnw-YWgN0jpXh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwtQWnWglWT_DhrHWx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw5EX9Q3cekMGm0Lfh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
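A raw batch response like the one above has to be parsed and checked before the per-comment codes can be stored. The sketch below is a minimal, hypothetical validator: the four dimension names match the raw response, but the allowed value sets are inferred only from the values visible on this page and are otherwise assumptions, as is the `validate_batch` helper itself.

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# The full code books likely contain more categories than are visible here.
ALLOWED = {
    "responsibility": {"developer", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "indifference", "resignation", "outrage", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed entries."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if not isinstance(entry, dict) or "id" not in entry:
            continue  # skip entries missing the comment ID
        # Every dimension must be present and drawn from its allowed set.
        if all(entry.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(entry)
    return valid

raw = (
    '[{"id":"ytc_example","responsibility":"developer",'
    '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]'
)
print(len(validate_batch(raw)))  # → 1
```

Dropping malformed entries rather than raising keeps one bad line in a batch from discarding the other nine codings.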