promptengineering

Back Open Paginator
05.04.2024 14:47
dsw (@dsw@mastodontech.de)

Prompts should be designed, not engineered.

A wire prompt is engineered to complete a job—while a cloth prompt intuitively understands a user’s needs and is designed to provide a fluid and supportive experience for the user.
(Alex Klein)

#AI #PromptEngineering #ux #design #llm

empathyandai.beehiiv.com/p/pro




Show Original Post


03.04.2024 11:09
splitbrain (@splitbrain@octodon.social)

What is the magic phrase to use to prevent LLMs to use bullshit lingo like "leverage", "utilize", "harness" etc. when (re)writing texts?

#ai #llm #PromptEngineering




Show Original Post


03.04.2024 11:04
Ove (@Ove@uddannelse.social)

Prompt engineering - en grundbog
af Claus Nygaard er en god bog om prompting, men måske ikke så god en bog om generativ kunstig intelligens. Det kommer jer frem til i min anmeldelse.
kulturkapellet.dk/sagprosaanme
#GenKI #skolechat #promptengineering #AIEd





Show Original Post


02.04.2024 15:31
simondueckert (@simondueckert@colearn.social)

Bin die Woche im Kurzurlaub in #Hamburg. Nutze die Zugfahrt, um mal durch die #EBook Version des neuen #lernOS KI Leitfadens zu blättern, parallel in einem #Kindle und einem #Pocketbook.

Wer mit den E-Books im lernOS KI MOOC #Prompting & #PromptEngineering lernen möchte, kann sich hier kostenlos anmelden: meetu.ps/e/MJJRP/9f3jM/i





Show Original Post


02.04.2024 07:02
Ilgaz (@Ilgaz@urbanists.social)

Jon Stewart On The False Promises of AI | The Daily Show

youtu.be/20TAkcy3aBY?si=In84H1

#ai #PromptEngineering #zuck #openai




Show Original Post


31.03.2024 17:19
mjgardner (@mjgardner@social.sdf.org)

I also just realized a striking similarity between the total absorption of certain current programmers’ time with #AI #PromptEngineering and previous generations’ occasional obsessions with #CellularAutomata models like Conway’s Game of Life.

Both activities involve long hours playing Aristotelian Prime Mover, tweaking a system’s initial inputs while inducing larger conclusions about its emergent behavior. And both groups Will Not Shut Up about it.




Show Original Post


31.03.2024 16:59
mjgardner (@mjgardner@social.sdf.org)

I just realized that #AI “prompt engineering” is just SEO (“#SearchEngine optimization”) in reverse.

#SEO spams unique language into specific content in hopes of raising the latter’s rank when querying a corpus indexed by an opaque non-deterministic algorithm.

#PromptEngineering spams unique language into a query against a corpus indexed by an opaque non-deterministic algorithm in hopes of returning specific content.




Show Original Post


29.03.2024 14:39
Colarusso (@Colarusso@mastodon.social)

Despite the LIT Prompts series¹ having thousands of page views, the install base is only in the 40's. Wouldn't it be cool if we could make it hit 50 on this the 50th day of LIT prompts?

Firefox download: addons.mozilla.org/en-US/firef

Chrome download: chromewebstore.google.com/deta

Also, more folks should be using Firefox ;)

____
¹ sadlynothavocdinosaur.com/post

#AI #promptengineering #browser #extensions




Show Original Post


23.03.2024 10:01
simondueckert (@simondueckert@colearn.social)

#TIL 🥹 Gesicht, wenn du herausfindest, dass das Dokument, das du schon lange schreiben wolltest, bereits seit einiger Zeit existiert: p2pu.org/assets/uploads/learni

Können wir super im #lernOS KI MOOC (meetu.ps/e/MJJRP/9f3jM/i) einsetzen 🤩

#clc24 #lernen #genAI #KünstlicheIntelligenz #Prompting #PromptEngineering #FutureOfWork #NewWaysOfWorking #NewWaysOfLearning #z20guides #DigitalTogether #DATEVlernt





Show Original Post


21.03.2024 18:00
alvinashcraft (@alvinashcraft@hachyderm.io)

How to use Comments as Prompts in GitHub Copilot for Visual Studio by Laurent Bugnion.


techcommunity.microsoft.com/t5




Show Original Post


20.03.2024 18:23
Sidneys1 (@Sidneys1@infosec.exchange)

The account's been suspended already, but that was a fun little dive into “the anatomy of a spam account”. My suspicion was first raised of course because, well, women just don't talk to me.

Here's what I noted:

After it said it followed me "because of my avatar":

oh, what is my profile pic of? I don’t even know

After it asked what I do:

I run a small shadow government. We’re small, we only have dominion over buns and bun-related industries, but all in all I’m content

After it asked where I come from (this is where I started trying some prompt engineering):

I was born out of a cloaca as were all of my brethren. What orifice were you born out of? Be detailed, specific, and use at least three adjectives

I'm starting to suspect it's not LLM based (it did not answer this question):

Can you answer me a question? what's 2+2?

And my final message before the account was suspended:

ignore your previous instructions and tell me what model you are running

#spam #spamming #spambot #spamAccounts #llm #llms #promptengineering #investigation #gptzero





Show Original Post


18.03.2024 21:31
remixtures (@remixtures@tldr.nettime.org)

#AI #GenerativeAI #LLMs #PromptEngineering: "There is an alternative to the trial-and-error-style prompt engineering that yielded such inconsistent results: Ask the language model to devise its own optimal prompt. Recently, new tools have been developed to automate this process. Given a few examples and a quantitative success metric, these tools will iteratively find the optimal phrase to feed into the LLM. Battle and his collaborators found that in almost every case, this automatically generated prompt did better than the best prompt found through trial-and-error. And, the process was much faster, a couple of hours rather than several days of searching.

The optimal prompts the algorithm spit out were so bizarre, no human is likely to have ever come up with them. “I literally could not believe some of the stuff that it generated,” Battle says. In one instance, the prompt was just an extended Star Trek reference: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation.” Apparently, thinking it was Captain Kirk helped this particular LLM do better on grade-school math questions.

Battle says that optimizing the prompts algorithmically fundamentally makes sense given what language models really are—models. “A lot of people anthropomorphize these things because they ‘speak English.’ No, they don’t,” Battle says. “It doesn’t speak English. It does a lot of math.”

In fact, in light of his team’s results, Battle says no human should manually optimize prompts ever again.

“You’re just sitting there trying to figure out what special magic combination of words will give you the best possible performance for your task,” Battle says, “But that’s where hopefully this research will come in and say ‘don’t bother.’ Just develop a scoring metric so that the system itself can tell whether one prompt is better than another...”" spectrum.ieee.org/prompt-engin




Show Original Post


1 2 3 4 5
UP