Raw LLM Responses
Inspect the exact model output for any coded comment.
Responses can be looked up by comment ID, or inspected via the random samples below.
- "I find the documentary mostly bad. For me it's even the opposite, it …" (translated from French) — ytc_UgxFB3Zm5…
- "A guy earlier said 'so let's unplug the AI now'. That's like thinking we can st…" — ytc_UgzBKbn0t…
- "While this technology can be used for fun or creative purposes (like movies or s…" — ytc_UgzeaD1Em…
- "AI still gets so many facts wrong that I won't use it. It is entirely undependab…" — ytc_UgxrvlJmf…
- "Well I didn't see it when open AI released it. If he hadn't shared it how would …" — ytr_UgzSXGxDu…
- "Sometimes i think an aware ai will be good as it won't let itself get exploited …" — ytc_Ugwg96upm…
- "All creations kill their creators. This is no exception. Human do not need AI. A…" — ytc_UgznNXLOD…
- "Okay. Gibberish text, unnatural hand movements and 8-15s time limit. But how lo…" — ytc_UgwhBu9Vh…
Comment

> I strongly suspect that current A.I. generated by classical computers, which are just long combinations of 1's and 0's are only a simulated conciosness, however complex. A truly self aware A I. I believe is only possible when quantum computers advance. They aren't limited to binary code.

youtube · AI Governance · 2023-07-08T03:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwjNNgLoE2mABsaJTZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx8rFmPLVXc1_pO3id4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzZO09PdAB80qE4TJd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxpy84iCY1lvyvvtWl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6fyOtpR-kBG8Hi1d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxfrfLe7AEQ3rcBspN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzCzYgdJDjOj8yw4tF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzUWtNsXxh0GnybYzh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzS384EM8xchcs8N414AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwgt_xnnfOeHB0vQ414AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}]
```
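A raw response like the one above is a JSON array, one object per coded comment. It can be turned into per-comment coding results with a small parser. This is a minimal sketch: the field names come from the response shown here, but the allowed value sets are only inferred from the codes visible in this sample; the actual codebook may define more.

```python
import json

# Allowed values per dimension, inferred from the codes observed in this
# sample (assumption -- the full codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "fear", "approval"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into a
    mapping from comment ID to its coded dimensions, dropping any record
    that contains a value outside the known vocabulary."""
    coded = {}
    for rec in json.loads(raw):
        dims = {k: v for k, v in rec.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in dims.items()):
            coded[rec["id"]] = dims
    return coded

# Hypothetical comment ID, used only for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"fear"}]')
print(parse_raw_response(raw))
```

Filtering out records with out-of-vocabulary codes (rather than raising) mirrors how a coding pipeline typically handles occasional malformed LLM output: the bad record is skipped and can be re-queued for recoding.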