Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
| Comment preview | Comment ID |
|---|---|
| I think before glazing chatgpt you should look beyond crappy benchmarks and out … | ytc_UgxGpZsT2… |
| People, throwing out UBI as some kind of solution are simply not thinking enough… | ytc_Ugz0CyMcx… |
| What this video misses is that Angel Engine is AI art in every sense of the term… | ytc_UgxHKr6ox… |
| The future is good. / Every house will have a robot or two that can do everything… | ytc_UgxjmugKD… |
| You mean..this guy missed the opportunity to tell AI " come to the dark side....… | ytc_UgzEKJxWo… |
| How is Tucker Carlson gonna make this about the democratic party using AI for po… | ytc_UgxgD7i1j… |
| Around the 5 min mark.... / Labor replaced by Machines / Intelligence replaced by A… | ytc_UgyfG4WmB… |
| Time !=skill / senior !> junior / Lot of crap code in the wild. More ai = more crap… | ytc_UgyaMzxT0… |
Comment
Despite knowing the dangers of the evolving AI, if scientists at places like Silicone valley are still adamant on continuing the development of such monsters, its a very Hannibal Lector-like action. Despite knowing the risks they are curious to know what it's gonna do.
I think someone with a concise should distribute the kill switch to public.
Source: youtube
Topic: AI Governance
Date: 2023-07-07T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
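For downstream use, a coded comment maps naturally onto a small record type. Below is a minimal Python sketch, not the project's actual code: the field names follow the keys in the raw response shown next, and the example values in the comments are only those visible in this sample, not necessarily the full codebook.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class CodingResult:
    """One coded comment; fields mirror the keys in the raw LLM response."""
    comment_id: str
    responsibility: str  # e.g. "developer", "company", "government", "ai_itself", "distributed", "none", "unclear"
    reasoning: str       # e.g. "deontological", "consequentialist", "mixed", "unclear"
    policy: str          # e.g. "liability", "regulate", "none"
    emotion: str         # e.g. "outrage", "fear", "approval", "resignation", "unclear"
    coded_at: datetime   # the "Coded at" timestamp in the table above


# The result from the table above as a record; the ID is taken from the
# matching entry in the raw response shown below.
example = CodingResult(
    comment_id="ytc_UgzzcMi5VJvM6K1JqcZ4AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="liability",
    emotion="outrage",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```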
Raw LLM Response
[{"id":"ytc_UgwMVEYanvoymzh3b0x4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz8oMFU8ShohmCrbHx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx3yn15dmyPFgYrm2l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyn-J_Svy56gCRwIRB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgytDNpEDsStOAB5pjN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-ghHmAUOP_dMqsqx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxQ01PcKccffJbTouR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgxCtugX0WfsvwPk_WN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz9uW3s8NH1NQgJVSN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzzcMi5VJvM6K1JqcZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}]