Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “What happens if enough people don't like the a i services that are noticeable? …” (ytc_UgwyK5L4T…)
- “the AI “artists” also like to argue (or at least they have used this argument wi…” (ytc_UgyIxqOBk…)
- “In case it wasn't clear: it wasn't AI that caused him to harm himself - it was h…” (ytc_UgzM7LiiR…)
- “58:50 I DON`T think that an AI starts at a point of "NOTHING is ok. Hmm, now you…” (ytc_Ugw_Q-KTl…)
- “This is just one of the the classic attention grabbing techniques for pseudo-int…” (rdc_ohz3pv6)
- “As poetically funny as it seems do we forget the undertone of iRobot Terminator …” (ytc_UgwA8L0RB…)
- “Microsoft is funding openai and using gpt models in s completely YOLO way with B…” (ytr_UgyjYx4UG…)
- “this guy maybe exaggerating , in the real economy.....whos gonna pay for all the…” (ytc_UgxwQTzcm…)
Comment
> I use chat gpt as a qa tester and automation engineer all the time. However you need you need to give it very specific instructions and sometimes remind it thats its wrong or say why dont we do this instead? Recently i asked it to list all my files in my pom.xml that where out of date and found about 10 it missed when i pointed one out I then said "with this in mind did you miss any others" and it found some but not all of the rest.
>
> Its good a good tool that can make work more efficient especially when debugging but thats all it is right now.
youtube | AI Responsibility | 2025-06-14T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugyy2kZGoFTsXJllDch4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwBks7ZkutXsQEaQf54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "curiosity"},
  {"id": "ytc_UgxfOKE2EfYTsvcSCk94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy9EmS3HtOaAccx7Dt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyXZrk1b_ONGYNJzSF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwzzmpfR4RMO1IJpZB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzPzj06dt8LD8FSibd4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwioQ9dip0rjviCBYl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyDUGVEbkvODL2VVNB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz7GwPjRkejJzDwcUh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
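The "look up by comment ID" view above can be reproduced directly from this raw response: parse the JSON array and index it by `id`. A minimal sketch, assuming the raw response is exactly a JSON array of per-comment codes as shown (the array here is abridged to two of the entries above; the `lookup` helper name is illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# Abridged to two entries from the full array above, for illustration.
raw = """
[
  {"id": "ytc_Ugy9EmS3HtOaAccx7Dt4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxfOKE2EfYTsvcSCk94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the coded entries by comment ID for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (illustrative helper)."""
    return codes[comment_id]

rec = lookup("ytc_Ugy9EmS3HtOaAccx7Dt4AaABAg")
print(rec["responsibility"], rec["emotion"])  # user approval
```

The looked-up record for `ytc_Ugy9EmS3HtOaAccx7Dt4AaABAg` matches the Coding Result table above (responsibility `user`, reasoning `consequentialist`, policy `none`, emotion `approval`).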