Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Dear Humans : how stoopid are you ? Could you really not foresee this as one o…" (ytc_UgxXdGXBw…)
- "You should use it! But first learn what it's doing. So, it's basically learn how…" (ytr_UgxWD3uxR…)
- "Once I saw an ai artist who had good prompts and it came out decent then I saw o…" (ytc_Ugx0JIvu6…)
- "No, u know u already caused damaged and fucked things up. You knew we needed to …" (ytc_UgwDbzZMp…)
- "OK I get that using AI to make art and uploading it is unethical (and I 100% agr…" (ytc_UgyHXHpIY…)
- "I don't like the fatalist type of statements. I wonder if any of these people ha…" (ytc_UgzOKtR_a…)
- "Human brain is a machine too. All arguments against AI are just irrational and b…" (ytc_UgzSgpMNB…)
- "This reminds me of the mcplant or some vegetarian version of a chicken nugget. L…" (ytc_UgwGYlMB1…)
Comment
ChatGPT has knowledge, not intellect. It can't think using logic and has no form of biases or thought. It consists only of what people choose program into it, and the data that it learns from. It's nothing close to a true AI, and is just a databank of knowledge that gives you information on what you ask of it based on flimsy algorithms that often generate incorrect information by just predicting what word should come after the word before it based on the prompt you submitted to it. It has no clue what the words it generates mean, simply because it doesn't have the ability to think or to understand.
Source: youtube · Video: AI Moral Status · Posted: 2023-05-09T09:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgyQiWZWW5bGyqh0rTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgxAufNQVUA_y8KWEu94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},{"id":"ytc_UgxGmGdncl0gKwHNcb14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytc_Ugxfjz5SzSnHDFGEQLV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_Ugwo58PFitlmcf16ftd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgzSVI4_BakYYY-ua-h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_Ugzap5xWdBU5TTSi4KF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgyLvbhlm9BE9EfaH5p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},{"id":"ytc_Ugxi-EeasMpgclM8XHt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_UgzNsrbGrEbnlzTSGcZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}]
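A raw response like the one above is a JSON array of per-comment code objects, one per comment ID, with the four coding dimensions shown in the result table (Responsibility, Reasoning, Policy, Emotion). A minimal sketch of parsing such a response into a lookup by comment ID, assuming Python and the standard `json` module (the `raw` string and the `codes_by_id` helper below are illustrative, not part of the tool):

```python
import json

# Abbreviated example of a raw LLM coding response: a JSON array of
# code objects, one per comment, each keyed by comment ID.
raw = '''[
  {"id": "ytc_UgyQiWZWW5bGyqh0rTh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxAufNQVUA_y8KWEu94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]'''

# The four coding dimensions from the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw_json: str) -> dict[str, dict[str, str]]:
    """Parse a raw coding response and index the codes by comment ID.

    Any dimension the model omitted falls back to "unclear", mirroring
    the table shown when a comment could not be coded.
    """
    out: dict[str, dict[str, str]] = {}
    for obj in json.loads(raw_json):
        out[obj["id"]] = {d: obj.get(d, "unclear") for d in DIMENSIONS}
    return out

codes = codes_by_id(raw)
print(codes["ytc_UgxAufNQVUA_y8KWEu94AaABAg"]["policy"])  # → ban
```

Indexing by ID is what makes the "Look up by comment ID" view possible: the table for any coded comment is just the dictionary entry for its ID, with `unclear` filling any dimension the model left out.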