Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Just wait. In less than 7 years, humans will begin fkn these things, developing …" (ytc_UgzHhHC4z…)
- "It's not over yet, we can reject the Ai!!! we should have unions fighting this ,…" (ytc_UgxrMBN4S…)
- "I'd only use ai if it came to doing something that is intentionally supposed to …" (ytc_UgytAxhLm…)
- "That man was missing one component and attorneys know this. Yes, you can use AI…" (ytc_Ugx8lCgA1…)
- "AI is ruining the gaming industry by creating consistency that exists outside of…" (ytc_Ugws4nDlt…)
- "A LOT of innocent people gonna be put away for a long time due to deep fakes and…" (ytc_UgyKyAajY…)
- "If we get to a point of AI becoming conscious, we have to ask this question………Wh…" (ytc_UgzSXBExl…)
- "Because they are trained on human conversations.... They are just madlib generat…" (ytr_UgzLhzFyZ…)
Comment
Very entertaining the conversation with ChatGPT.
I am not sure if you really understand how an AI model works, there are some videos in a channel call 3blue1brown that are great to help on that. But in a very short and full of missing explanation, an AI model (at least the ones we have today) is just an excellent probability predictor, where it can identify what should come next, but it doesn't "know" what it means, all words and sentence to an AI are just a bunch of numbers, and the goal is to find the right "number" to add in front of the last one. With that being said, AI doesn't think, it can't process things as we do, it is only able to put the most probable word after the previous one.
youtube · AI Moral Status · 2024-11-26T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy_i0-ePTuDq5U-aPJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyk1sEH5jsdofoP_aR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxuztdgj70e9E7Zfpl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwbdrT5qu82woWgfTt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugye7zCs_B-SDeJLfWV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3XYWF_2wNXZUGBXh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwpPPxHy7EOS9gX_ux4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxca8mKcIxSnwoAs-p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwftVGWhSzItF3Niat4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyqaBBD4WQcUsj0qS94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
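The raw model response is a plain JSON array, so the "look up by comment ID" feature reduces to parsing it and keying each record by its `id` field. A minimal sketch, assuming the record shape shown above (the `index_by_comment_id` helper name is hypothetical, not part of the pipeline; the excerpted records and field names are copied from the response):

```python
import json

# Excerpt of a raw LLM response in the format shown above (a JSON array
# of coding records, one per comment).
raw_response = """
[
{"id":"ytc_Ugy_i0-ePTuDq5U-aPJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwbdrT5qu82woWgfTt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model's JSON array and key each coding record by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwbdrT5qu82woWgfTt4AaABAg"]["emotion"])  # approval
```

In practice the parse step would need a guard for malformed model output (e.g. catching `json.JSONDecodeError`), since nothing forces an LLM to emit valid JSON every time.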