Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_Ugwtc7-iq…`: "Thanks for your comment! Sophia's design is intended to be relatable and approac…"
- `ytr_UgxCu43G5…`: "Hi Gaurav, you got the right answer. Kudos. The contest is over and winners have…"
- `ytc_UgxZFXv9o…`: "As an Artist but also a big Fromsoftware fan. I see disabled people, even damn g…"
- `ytc_Ugyiyyr55…`: "Lol, now you can use this to turn against that fake blonde woman who poses as f…"
- `ytc_UgxY8wquW…`: "I don't agree with the conclusion at all. Firstly, it is widely accepted that L…"
- `ytc_UgwSswaQ7…`: "When an artist displays his art publicly he consents to human analysis, praise, …"
- `ytr_UgxQdS6GV…`: "@True_Demi-fiend The point is, it ALL depends on the training! Today's LLMs wil…"
- `ytc_Ugy_cuTFM…`: "How on earth can AI do ethics and morality? Wouldn't it just be based on inform…"
Comment
Novara has said that AI is just a useless predictive text thing. A waste of time that will never do anything real. So, why do we need to worry about preventing it doing bad things? If it is just bad at doing things. I use Claude BECAUSE they did all this research into what AI will do to survive, into the morality. And they are transparent about these risks. Because they are putting into place measures to reduce them. The right, the middle and the centre left are using AI. The far left are not. Good for you. We shall see how it works out for you. Bet you will come late to the party once is is irrefutable how powerful it is and demand you be listened to about how it will be used. It will be too late then. But you won't understand that you should have engaged in the debate earlier.
youtube · 2026-02-12T12:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy89wYbOv0MDjHaUJB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxmoplfwD4tIINuXLN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzzhv-uNQE8GKuhIvZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwQ9NF1akJu3FY6LXd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxEmQIjpuLe5ywseml4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxcv5gvGOvku9MOoXJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzSnd19SdYjJ-D3vLl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgzNLLsmzmh9PR5REjd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzsyKoBPBleE3RxnEd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx26pDkJ087IshFp0Z4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"}]
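The "Look up by comment ID" view above can be approximated in a few lines: parse the raw LLM response as a JSON array and index it by `id`. This is a minimal sketch, not the tool's actual implementation; it assumes the response is valid JSON (the array closed with `]`) and uses the field names visible in the raw output (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). Only two entries from the batch above are included for brevity.

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# Two entries copied from the batch shown in this section.
raw = '''[
 {"id":"ytc_Ugy89wYbOv0MDjHaUJB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxmoplfwD4tIINuXLN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Index the coded rows by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up one comment's coding, as the lookup box does.
row = codes["ytc_Ugy89wYbOv0MDjHaUJB4AaABAg"]
print(row["emotion"])  # indifference
```

If the model returns malformed JSON (as in the raw response above, which closed the array with `)` instead of `]`), `json.loads` raises `json.JSONDecodeError`, so a real pipeline would want to catch that and flag the batch for re-coding.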