Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Unironically I wouldn't be surprised if the only way to make AI safe is to create a conscious Compassionate caring AI that wants to protect humanity -Granted im sure that more details would be needed- but then release it to the internet, It could kill or save us depending on how complete it is

youtube · Cross-Cultural · 2026-02-01T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgxA80uxQiq-Ar9NScR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugy6VLuIri6FRmaakIN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwshcQ9gKTKd8TI7UB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzysdzdBdDywGUqYt94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwZ-KnVDEdpIBNYyDx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxDno08ge8gEAiOmMF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgymJ6o85WsYJZ_hvVR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxySiPj4l3Cyqe3MF14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyrofrZlK9-vRr9SSV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx8d0AZ_t-SygQsiUR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}]
```
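The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions. A minimal sketch of how such a response could be parsed and indexed by comment ID for lookup (two rows are copied from the response above as sample data; the variable names are illustrative, not part of the actual pipeline):

```python
import json

# Two example rows from a raw LLM coding response (JSON array of per-comment codes).
raw_response = """[
 {"id":"ytc_UgxA80uxQiq-Ar9NScR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugy6VLuIri6FRmaakIN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Build an index from comment ID to its coded dimensions.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's codes by ID.
code = codes["ytc_Ugy6VLuIri6FRmaakIN4AaABAg"]
print(code["responsibility"], code["policy"])  # company regulate
```

Indexing by `id` makes it straightforward to join the model's codes back onto the original comments, which is what the per-comment inspection view above relies on.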