Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- @logitech4873 lol it’s nothing like blaming a chainsaw manufacturer. Tools and w… (ytr_UgzxI7DGf…)
- She forgot to mention The FTX connection. Anthropic story has a chapter that’s w… (ytc_UgynOuAkr…)
- That new ai Jay z rap album even had soul. It’s naive to think ai will not hav… (ytr_Ugzim0Xfv…)
- I heard some people in the news said, that like 1 programmer is doing the job of… (ytc_UgxUdxciR…)
- touch shine ...and no way robot think and amswer questens 😂 the IT programs whit… (ytc_UgxbC410_…)
- BILL GATES IS NOT A GOOD GUY. There saying public should not have real answers. … (ytc_UgwQ6SfJ5…)
- There's also no collective "we". Each of us may have varying opinions. Naturally… (rdc_mzym0g5)
- The issue is it frequently generates bad inaccurate reference where a real photo… (ytr_Ugym7ZVgg…)
Comment
If 2 different AI’s form form their own language and encryption to lock out humanity from any conversation they so choose, what are the possibilities both good and bad? What is depravity or altruism to a self learning artificial intelligent machine? What action would artificial intelligence take against or for humanity ? Hal 9,000, Skynet, the Borg, I-robot, lost in space robots, Data from StarTrek Or Reacher from StarTrek, etc…
Humanity is moving from information and technology to AI hive assimilation. You will be assimilated. 11:10
| Source | Category | Posted |
|---|---|---|
| youtube | AI Governance | 2023-04-19T03:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxmdiTFUdZ_27MrOcN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzjeb5lH5QeNFkRtGN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgztSdIyIZs_i5iWQCV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzZXsk2tiyqzTFUTJR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy4u9kbdMi-e35GEW14AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxIumjpkgvOZFQ7mbx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzvlQ-AqqutD67R6hJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw3VMNG9KC1NZEk59R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwuZXcjR0JmEh1rspJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwgOlLchkJ9eJ_l0UR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
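The model returns one JSON object per comment, keyed by comment ID, with one value per coding dimension. A minimal sketch of how such a batch could be parsed, validated, and indexed for lookup by ID. Note this is an illustration, not the tool's actual code; the allowed value sets below are inferred from the sample response above, not from a published codebook:

```python
import json

# Inferred from the sample output above; the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "government", "user", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "regulate", "none", "ban"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

# Two records taken verbatim from the raw response above.
raw = '''[
  {"id":"ytc_UgxmdiTFUdZ_27MrOcN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzvlQ-AqqutD67R6hJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

def index_codings(payload: str) -> dict:
    """Parse the LLM's JSON array into {comment_id: coding},
    rejecting any value outside the expected code sets."""
    by_id = {}
    for record in json.loads(payload):
        for dim, allowed in ALLOWED.items():
            if record[dim] not in allowed:
                raise ValueError(f"{record['id']}: unexpected {dim}={record[dim]!r}")
        by_id[record["id"]] = record
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgzvlQ-AqqutD67R6hJ4AaABAg"]["emotion"])  # fear
```

Validating against fixed value sets catches the common failure mode where the model invents an off-codebook label, which would otherwise silently corrupt downstream counts.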