Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_Ugx-Uccoo… : Terminator 1: Kyle Reese: Defense network computer. New. Powerful. Hooked into…
- rdc_ohve0b9 : The Meta AI glasses can do live language translation, so it's possible that he w…
- ytc_UgxETvQD6… : So, coding isn't gonna be coding anymore. Design a solution, glue pieces togethe…
- ytc_UgwvnwYCA… : ai is not self-aware, as long as it dont self-aware all oh those situations are …
- ytc_Ugx3nVqjL… : Let me guess. That mega nerd is ready to change into his super hero costume and …
- ytc_UgwPCZCzk… : So basically you just showed how the Tesla driving A.I. broke traffic law. Lane…
- ytc_Ugz63w5DF… : AI technology needs a lot of energy and ressources. Billions are invested in the…
- rdc_ncjupm7 : The thing that AI is going to do is shift the balance of power from Labor to Cap…
Comment
You do understand that if the data fed in is not accurate then the output is not accurate. If you debate the GPT instead of guiding it also it will mess up its output. I’m a programmer and I use it to write the old boring code I don’t want to and I’ve learnt that I need to steer it in the right direction quite a bit. The public GPT and most certainly the free one (GPT-3.5), is not the best there is. OPENAI has the chat gpt playground you can pay for. It is waaay better but it also could be waaaay worse. Ultimately asking ChatGPT to be very accurate about something like law will only lead to great disappointment. And if you are thinking it is just ChatGPT, you’ll be wrong. I tested out an opensource LLM developed in France the other day which has a “developer model” let’s call it. It was very unfiltered infact it could explain to me how to do a bunch of different stuff, some very illegal but even then it was very wrong. Though my computer could only take so much so I only got about 5 prompts. If that doesn’t convince you, look to other generative AI models, like image generation models. There’s so much to work with there, they are amazingly accurate at times and very bad at other times. Hope that explains it. Again, if you try debate or fight the GPT in certain matters it’ll either just lie or agree with you. You will also find that if you prompt it with the same prompt in say 10 different chats, it’ll give answers that are different in a few probably. While all that is true, when making the AI “safe” for public use it ends up rejecting certain inputs or being biased and that’s obviously to be expected.
Source: youtube | Topic: AI Governance | Posted: 2023-12-28T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzD2zY49cvbRsdnHxx4AaABAg.ANQgiq3E_zxANR8tjyr_fl","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzD2zY49cvbRsdnHxx4AaABAg.ANQgiq3E_zxANRutcKD-Jt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzD2zY49cvbRsdnHxx4AaABAg.ANQgiq3E_zxANSMQkLr4DO","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyUU1VBnSbN4lww8uF4AaABAg.AP6TkYgBQAuAP6k4xtQCeV","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyUU1VBnSbN4lww8uF4AaABAg.AP6TkYgBQAuASeSUj_Mmsu","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwNS0N34avIPHnLCMN4AaABAg.AP6EwfdzEtfAP6cTfsiN2L","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgwNS0N34avIPHnLCMN4AaABAg.AP6EwfdzEtfAP79jaK00wx","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgxC0eL7B0JuJ-WxeWx4AaABAg.9xpRExZM0D69ysrHVvqE-y","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzVDo92JSRgLjbugy54AaABAg.9xZTM2LCPDf9ysrSXs9lDv","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzVDo92JSRgLjbugy54AaABAg.9xZTM2LCPDf9z65-73qjTP","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
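The lookup-by-comment-ID workflow shown on this page can be sketched in a few lines of Python: parse the raw response, check each coding against the dimensions listed in the Coding Result table, and index by ID. This is a minimal illustration, not the tool's actual code; the sample IDs (`ytr_abc`, `rdc_xyz`) are hypothetical, and the allowed values per dimension are inferred from the samples above, so the full codebook may define more categories.

```python
import json

# A raw LLM response is a JSON array of coding objects, one per comment
# (hypothetical sample IDs; real IDs look like ytr_…, ytc_…, rdc_…).
RAW_RESPONSE = """
[
  {"id": "ytr_abc", "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "approval"},
  {"id": "rdc_xyz", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"}
]
"""

# Allowed values per dimension, inferred from the samples on this page.
DIMENSIONS = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none"},
    "emotion": {"approval", "indifference", "resignation", "outrage", "fear"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw response, validate each coding, and index by comment ID."""
    codings = json.loads(raw)
    for c in codings:
        for dim, allowed in DIMENSIONS.items():
            if c.get(dim) not in allowed:
                raise ValueError(f"{c['id']}: unexpected {dim}={c.get(dim)!r}")
    return {c["id"]: c for c in codings}

by_id = index_codings(RAW_RESPONSE)
print(by_id["ytr_abc"]["emotion"])  # approval
```

Validating before indexing catches the occasional off-schema value a model emits, so a bad coding fails loudly instead of silently entering the dataset.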