Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "i mean, it feels really dishonest when people call ai art something they made, c…" (ytr_UgwK8PMaE…)
- "I ask the government to look at my past comments on AI, before we were capable o…" (ytc_UgyYJxBo3…)
- "I am really unsettled by the fact that openAI has to suppress chatgpt so much. I…" (ytc_UgzQF57Tw…)
- "Yo the movement is so much like a normal people of they are a robot 😮😮…" (ytc_UgwquiHZJ…)
- "The insidious roll-out for the implementation of AI and 6G is evil treachery tur…" (ytc_UgxC0cG7T…)
- "I am honestly interested in hearing him explain how AI will take over commercial…" (ytc_UgyOwyV98…)
- "95%+ of my interactions with chatgpt inevitably rapidly devolve into the ai abus…" (ytc_UgzX-B-0O…)
- "Oh look, a wanker who decides to defend AI because they cannot create art. Remov…" (ytr_UgztNUPgg…)
Comment
The thing with AI is that it still can't really learn anything. It is very good at pulling in what is already out there and then using that information to complete a task, but it can't create and it can't go outside the bounds of its own programming. All the creepy things they say are just regurgitated ideas that humans have already put out there. Currently if AI were to go bad it will be because it was told and allowed to do so by the people who control it. The only time we truly become in danger is if an AI is allowed to recompile itself at will. It can then change what it is allowed to do. If humans give the program that ability then we are in trouble. Until then I wouldn't worry too much about it.
youtube · AI Governance · 2023-07-07T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy0Txo1NgcixdnRnnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx--uFdVqPITSEWRf94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzNySYaxuxfV-4LSol4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxTlRELWKsr4cF76014AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx6aMf72BL9UgtZHzN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgynDDTIR55w6WUhZD54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwiOKMZAFfuUyASpTl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxPNkxi4nKCVO-zfBp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwra1GYExM6JFQlSVN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYtD8aJ8Dgr1evxst4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]
```
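A raw response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the allowed category values are exactly those seen on this page (the full codebook may define more); rows with an unknown value or a missing `id` are dropped rather than coerced.

```python
import json

# Allowed values per dimension, inferred from the codes shown on this page.
# Assumption: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "company", "ai_itself", "user"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"approval", "indifference", "fear", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every row must be an object carrying a comment ID.
        if not isinstance(row, dict) or "id" not in row:
            continue
        # Every coded dimension must hold one of the known category values.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example input (the ID is illustrative, not from the dataset).
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"fear"},'
       '{"id":"ytc_bad_row","responsibility":"nobody"}]')
print(parse_coding_response(raw))  # only the first row survives validation
```

Validating against a fixed value set catches the common failure mode where the model invents an off-codebook label; dropping such rows keeps the downstream coding table consistent with the dimensions shown above.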