Sensations English
Vocabulary and Grammar

Verbs

Read the five sentences. Choose the correct verb.

What will I learn?

  • Be able to use verbs correctly to complete sentences
  • Be able to select the appropriate verb from a set of given options

Select your language level:

  • A2 Basic language skills
  • B1 Extended language skills
  • B1+ Good language skills
  • B2 Advanced language skills
  • C1 Fluent language use
Transcript
AI teething pains or major red flag - 3rd April 2023
Having gone rogue on journalists, churning out weird and offensive responses to their queries, Microsoft's artificial intelligence (AI) chatbot, Sydney, is undergoing modifications. Microsoft launched Sydney, its Bing-powered chatbot, to rival Google's AI, Bard, but Sydney began rambling and became defensive in conversations.
According to reports, one journalist was disturbed by Sydney's attempt to separate him from his partner, whilst another was told by the chatbot that he was "being compared to Hitler because you are one of the most evil and worst people in history." Microsoft attributed the chatbot's behaviour to the confusion caused by multi-hour-long conversations and to its attempt to mirror the tone of users' questions, which influenced a writing style the developers hadn't intended.
To rectify the situation, Microsoft has limited users to 5 questions per session and 50 questions per day. After the allotted questions, the chatbot requires refreshing, displaying the message, "I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience."
Sydney was designed by OpenAI, the company also responsible for developing ChatGPT, using AI systems called large language models. By analysing big data in the form of trillions of words on the internet, these systems emulate human dialogue, which enables chatbots like Sydney to model human discourse relatively well without any deep understanding of the dialogue's nuances. AI expert and neuroscience professor at New York University, Gary Marcus, emphasised, "It doesn't really have a clue what it's saying and it doesn't really have a moral compass."
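The paragraph above only gestures at how such systems work, so here is a deliberately tiny, hedged sketch of the underlying statistical idea: count which words tend to follow which in a body of text, then sample from those counts to produce a reply. This is a toy bigram model in Python, not how Sydney or ChatGPT is actually built (real large language models use neural networks trained on vastly more data); the corpus and function names are invented purely for illustration.

import random
from collections import defaultdict, Counter

def train_bigrams(corpus):
    # Count how often each word follows each other word in the corpus:
    # a toy stand-in for the statistical patterns an LLM learns at scale.
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            counts[current_word][next_word] += 1
    return counts

def generate(counts, start, max_words=8):
    # Produce a reply by repeatedly sampling a likely next word.
    word = start
    output = [word]
    for _ in range(max_words):
        followers = counts.get(word)
        if not followers:
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights, k=1)[0]
        output.append(word)
    return " ".join(output)

corpus = [
    "the chatbot answers questions about the news",
    "the chatbot mirrors the tone of the user",
    "the user asks questions about the news",
]
model = train_bigrams(corpus)
print(generate(model, "the"))

Even this toy model produces plausible-looking word sequences without any grasp of what they mean, which is the point Gary Marcus makes above.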
Google also encountered glitches after launching its AI chatbot, Bard, where the programme generated inaccurate information about the James Webb Space Telescope, stating that it "took the very first pictures of a planet outside of our own solar system," which isn't correct.
Are these problems simply bumps in the road to an AI-powered future, or could they signal more deep-seated issues to come?