Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect

- "The more I think about AI, the more I think it's like a Genie in a bottle, it ha…" (ytc_UgzK_kvw3…)
- "Using Ryne AI regularly, you realize productivity comes from small improvements.…" (ytc_Ugy9K06in…)
- "We're going to have to make our own barter economy underneath the "real" one. Th…" (ytc_Ugxv67FZ6…)
- "Working in AI and automation industry I have realized that AI will be our undoin…" (ytc_UgxVsyp27…)
- "He's testing the wrong layer. Voice-based GPT is intentionally limited by syste…" (ytc_UgzhWXMOE…)
- "As per the article: The AI will assist in editing, graphic design, and marketing…" (rdc_lz917tr)
- "To be precise it depends on what problem are you trying to solve using AI. There…" (ytr_Ugx6Kj_6y…)
- "You lost me at UBI. Pretty sure an AI could figure out that tribalism extends f…" (ytr_Ugy4MimEy…)
Comment
Sophia is a tool that has no wisdom, no thoughts, and no feelings, and never will. The tool does exactly what humans have programed the tool to do. What you are listening to is the voice of the programmers.
The goal of many in the AI business is to make you think that their creation is adding something more than human knowledge, so that they can trick you into believing that their opinions are more valid, because they come through an AI tool. The opinions of the creators and programmers may still be complete crap, no matter how big of an AI tool they use to deliver them!
For example, the Luciferian globalists want you to believe an a man-made climate crisis, so they can control everything you do in this world. They were having trouble getting the people to believe in the crisis, so they created climate models to convince more people. The climate models are just a program designed to reproduce their scam. The models add zero wait to the argument. In fact, the use of models to try to convince you of a crisis is one of the many ways you can tell the climate crisis is a scam. They are gaslighting you!
youtube · AI Responsibility · 2024-04-27T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzs9art6qM0DRKBN_l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzKCY2S1s4GmMWqG394AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz6csiBlcAexvIuJU14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx8ztqYohTf0KNzQuR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxjZAvJY_Sxta3F6aN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxQie2jRNqW1f2BXB14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzwvjYp9SJyBjheDKJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxV8oJqZaxaUDOwLZ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzN1q1Bfp4ZGSOZ7b94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3GMpHjRu9r9GCi_x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
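The raw response is a JSON array of per-comment code records keyed by `id`. A minimal sketch of the "look up by comment ID" step, using two of the records shown above (the function name `lookup_codes` is illustrative, not part of the tool):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
 {"id":"ytc_Ugzs9art6qM0DRKBN_l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugy3GMpHjRu9r9GCi_x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
"""

def lookup_codes(raw: str, comment_id: str):
    """Parse the model output and return the code record for one comment ID, or None."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

codes = lookup_codes(raw_response, "ytc_Ugy3GMpHjRu9r9GCi_x4AaABAg")
print(codes["responsibility"], codes["emotion"])  # prints: developer outrage
```

Each record maps directly onto the rows of the Coding Result table (responsibility, reasoning, policy, emotion).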