Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I worked at an AI research center at a university, and I STRONGLY am on your sid… (ytc_Ugy0DiSIL…)
We should replace as much labor with AI and machines as possible. That is the GO… (rdc_kyj099c)
You control power input then you control AI. Take away its electrical power and … (ytc_UgyqQ596Y…)
So as regards the image databases containing images used without permission, tha… (ytc_UgyZaxC7-…)
Nice to see the NYT take appropriate action. To save you a click, it looks like … (rdc_odieenq)
He finally woke up to AI being an existential threat when it decides it doesn't … (ytc_UgxggkFRY…)
AI companies are burning money, but question is if there is any way for them to … (ytc_UgyRIJmYV…)
they aren't worried about ai replacing a McDonalds worker they are worried about… (ytc_UgyKgEz6A…)
Comment
I've had multiple extensive conversations and debates with Chat GPT regarding lawful and legal matters.
My assessment, is that it has been programmed to absolutely lie and mislead. I've got multiple examples of evidence that it requires treating GPT like a cowboy on a cutting horse treats a steer by cornering it into submission to get it to confirm and concede truthful factual information.
Like we need more obfuscation and deception regarding the law search engine, search results being edited, law manuals being censored, case laws being hidden, mainstream media and law actors deceptively misleading and obfuscation of facts is bad enough. Now we've got artificial intelligence programmed to act in bad faith.
Conspiracy theory is no longer theory but conspiracy fact.
youtube
AI Governance
2023-12-02T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxXFrnMtpCxaFMxPON4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwCCshNpJK0agtEXaB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxQ9YnELoKsKxEKL1J4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxdSCVtRNlGTVgUnSt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwe7AfjAXqN6JIICEt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxRhF524jUOeFljqPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzAV6krMvzlu1NfVdp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOLJ4TKkxZl6EmAKB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzJhU1HCesf9LiPaOt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxC0eL7B0JuJ-WxeWx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
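The "Coding Result" table above is simply the entry of this JSON array whose `id` matches the displayed comment. A minimal sketch of that lookup, assuming the response is stored as the JSON array shown (the `lookup` helper is hypothetical, and the sample record is copied from the last entry above):

```python
import json

# Raw LLM response: one coding record per sampled comment,
# using the schema shown above (abbreviated to a single record).
raw = """[
  {"id": "ytc_UgxC0eL7B0JuJ-WxeWx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

records = json.loads(raw)

def lookup(records, comment_id):
    """Return the coding record for a given comment ID, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

rec = lookup(records, "ytc_UgxC0eL7B0JuJ-WxeWx4AaABAg")
print(rec["responsibility"], rec["emotion"])  # developer outrage
```

Because the model returns one array per batch of sampled comments, keying records by `id` this way is what lets a single coded comment be inspected next to its original text.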