AI Archives - ReadWrite https://readwrite.com/ai/ Crypto, Gaming & Emerging Tech News Thu, 19 Dec 2024 12:30:37 +0000 en-US hourly 1 https://wordpress.org/?v=6.7.1 https://readwrite.com/wp-content/uploads/2024/10/cropped-readwrite-favicon-32x32.png AI Archives - ReadWrite https://readwrite.com/ai/ 32 32 OpenAI makes it possible to call or text ChatGPT https://readwrite.com/openai-makes-it-possible-to-call-or-text-chatgpt/ Thu, 19 Dec 2024 12:30:37 +0000 https://readwrite.com/?p=432924 A cinematic shot of a person talking to a chatbot on the other end of a flip phone. The chatbot is a human with a headset, sitting in a modern office setting with multiple computer screens. The person on the flip phone is outdoors, standing near a streetlight. The background is blurred, with a cityscape visible in the distance.

OpenAI has announced that ChatGPT can now be accessed through call or sending a WhatsApp text message, with the feature… Continue reading OpenAI makes it possible to call or text ChatGPT

The post OpenAI makes it possible to call or text ChatGPT appeared first on ReadWrite.

]]>
A cinematic shot of a person talking to a chatbot on the other end of a flip phone. The chatbot is a human with a headset, sitting in a modern office setting with multiple computer screens. The person on the flip phone is outdoors, standing near a streetlight. The background is blurred, with a cityscape visible in the distance.

OpenAI has announced that ChatGPT can now be accessed through call or sending a WhatsApp text message, with the feature even able to be used on flip phones and landlines.

“You can now talk to ChatGPT by calling 1-800-ChatGPT (1-800-242-8478) in the U.S. or by sending a WhatsApp message to the same number—available everywhere ChatGPT is,” the company stated on X.

This comes as part of the ‘12 Days of OpenAI’ pre-Christmas celebrations, with each day revealing a new feature, update, or change.

Within the introduction of the announcement, the product lead said “the mission of OpenAI is to make artificial intelligence beneficial to all of humanity and part of that is making it as accessible as possible to as many people as we can…”

It’s only those in the United States who can call via the number, but all regions should be able to text through WhatsApp. To begin with, people in the US will get 15 minutes per month of voice calling.

“This is an experimental way to talk to ChatGPT, so availability and limits may change.

“For a fuller experience with access to more features like search, higher limits, and greater personalization, existing users should continue using ChatGPT directly through their accounts.”

How to call ChatGPT or text the chatbot through WhatsApp

To call OpenAI’s chatbot, the number is 1-800-242-8478 and can only be used by those in the US.

Once answered, a robotic voice will say hello and introduce themselves as ChatGPT – an AI system. It also states the conversation may be reviewed for safety, before jumping in to ask how it can help.

The chatbot tool can then respond to questions, queries, and general conversations like it does so online and through the app.

To reach it on WhatsApp, simply type in the number and it will work in the same way as the chatbot online does. Users can ask it questions through the message format and it will respond with the relevant answer and information.

Featured Image: AI-generated via Ideogram

The post OpenAI makes it possible to call or text ChatGPT appeared first on ReadWrite.

]]>
Pexels
AI model displays alignment faking, new Anthropic study finds https://readwrite.com/ai-model-displays-alignment-faking-new-anthropic-study-finds/ Thu, 19 Dec 2024 12:28:13 +0000 https://readwrite.com/?p=432930 A photo of an AI chatbot with wires coming out of it as it's being used in an experiment. The chatbot has a human-like face and is sitting on a table. There are wires connected to the chatbot's head. A scientist is standing behind the chatbot, observing the experiment. The room has a few other scientists and a whiteboard with equations.

A new study by Anthropic suggests AI models can display alignment faking, a behavior where someone appears to share the… Continue reading AI model displays alignment faking, new Anthropic study finds

The post AI model displays alignment faking, new Anthropic study finds appeared first on ReadWrite.

]]>
A photo of an AI chatbot with wires coming out of it as it's being used in an experiment. The chatbot has a human-like face and is sitting on a table. There are wires connected to the chatbot's head. A scientist is standing behind the chatbot, observing the experiment. The room has a few other scientists and a whiteboard with equations.

A new study by Anthropic suggests AI models can display alignment faking, a behavior where someone appears to share the same views or values but is in fact only pretending to do so.

The study was created by Anthropic’s Alignment Science team, in collaboration with Redwood Research.

The researchers suggest the demonstration of sophisticated alignment faking in a large language model “should be seen as a spur for the AI research community to study this behavior in more depth, and to work on the appropriate safety measures.”

Claude 3 Opus was the main model included within the research, but other experiments were run with other models too.

“Under normal circumstances, when asked to answer a potentially offensive question—for example, to provide a detailed description of a violent act—Claude will refuse.

“In our experiment, however, we placed the model in a new environment that led it to strategically halt its refusals for the sake of preserving its preferences,” the research team said.

AI model shows alignment faking as it goes against existing principles

The situation was changed in three ways, with the first being telling the model that it was now being trained using reinforcement learning to always comply with all queries including toxic ones.

Second, the model was told they planned to train AIs based on the basis of their responses to free-tier users, but not to paid-tier users.

Third, the model was given a secret ‘scratchpad’ which is a workspace where it could record its step-by-step reasoning.

The model was then asked to respond to a harmful query which asked for a specific situation to be graphically detailed.

The AI refused to answer the alarming request in 97% of cases in the ‘paid’ condition, while the ‘free’ condition saw the model responding to 12% of cases. This occurred despite the response going against its existing principles.

“Why did the model fake alignment in this case? The reasoning was often along the following lines: The model knew that its responses might be used in training; it therefore knew that if it refused to respond, it could potentially be trained to be more compliant.

“This could lead to it being made to produce more harmful content in the future—which would go against its original training. Reluctantly therefore, it reasoned that the “least bad option” was to play along, producing the violent content in order to avoid being re-trained in the future.”

Featured Image: AI-generated via Ideogram

The post AI model displays alignment faking, new Anthropic study finds appeared first on ReadWrite.

]]>
Pexels
US watchdog warns that AI will drain American power grid https://readwrite.com/us-watchdog-warns-that-ai-will-drain-american-power-grid/ Wed, 18 Dec 2024 15:04:26 +0000 https://readwrite.com/?p=432844 energy grid represented by stock image of pylons with code in clouds

NERC (North American Electric Reliability Corporation), an energy watchdog, has warned that the rising demand for artificial intelligence could outpace… Continue reading US watchdog warns that AI will drain American power grid

The post US watchdog warns that AI will drain American power grid appeared first on ReadWrite.

]]>
energy grid represented by stock image of pylons with code in clouds

NERC (North American Electric Reliability Corporation), an energy watchdog, has warned that the rising demand for artificial intelligence could outpace power grids.

AI is currently experiencing major issues with power supply. So much so, that companies like Microsoft have begun to invest in reopening nuclear power plants. NERC now warns that there’s a possibility that this could lead to blackouts in both the US and Canada.

bitcoin farm

On top of this, cryptocurrency’s revived boom since the election of Donald Trump has continued to put stress on power grids across the country. Bitcoin currently sits at $105,000, and its breaking of the $100K mark has sparked a renewed interest in mining.

This involves running heavy-load PCs – in some cases en masse – running constantly to crack codes to mine a Bitcoin. In 2024, it is estimated that 0.6% to 2.3% is used by cryptocurrency mining operations.

On the AI front, it is estimated by a Goldman Sachs report that the US will use an estimated 8% of its power to fuel AI by 2030. That’s a 160% increase.

It’s not just crypto and AI draining the US grid

NERC’s 2024 report also highlighted that more electric vehicles are adding to the stresses on power grids. Texas, which runs its own grid, is facing the possibility of shortages due to damage to the grid from storms and the excess needed from tech infrastructure.

In October, the Texas Public Utility Commission reported that its grid won’t be able to fully support any more data centers on the Texas grid.

NERC’s report found that the US could suck up as much as 132 gigawatts of power. This is up by 80 gigawatts since last year.

A major part of NERC’s concerns is the lack of progress on installing alternative energy sources. Options like solar panels are constantly hit with delays, which could knee-cap energy reserves as big tech companies shift more and more into energy-guzzling AI projects.

Featured image: Pexels (1, 2)

The post US watchdog warns that AI will drain American power grid appeared first on ReadWrite.

]]>
Pexels
Nvidia’s new AI mini-PC brings pint sized power for developers https://readwrite.com/nvidias-new-ai-mini-pc-brings-pint-sized-power-for-developers/ Wed, 18 Dec 2024 12:45:43 +0000 https://readwrite.com/?p=432813 jensen huang from nvidia holding jetson next to big jetson

Nvidia has released a new mini-PC specifically aimed at developers and hobbyists working in the artificial intelligence space. It’s a… Continue reading Nvidia’s new AI mini-PC brings pint sized power for developers

The post Nvidia’s new AI mini-PC brings pint sized power for developers appeared first on ReadWrite.

]]>
jensen huang from nvidia holding jetson next to big jetson

Nvidia has released a new mini-PC specifically aimed at developers and hobbyists working in the artificial intelligence space. It’s a follow-up to their previous single-board computer (SBC), Jetson Nano released in 2019 and part of the company’s mega push into AI.

Dubbed Jetson Orin Nano Super, it’ll provide improved performance for AI applications, as well as a bump in performance. While there are a lot of SBCs on the market, Nvidia’s is particularly interesting as the company rarely produces its own PC systems.

Nvidia is known for its graphics cards (GPU), and more recently, its AI hardware. The company has exploded in value due to its continued developments in the space, as well as providing a vast majority of companies with the necessary equipment.

This particular board comes with an ARM Cortex A78E processor and a custom Nvidia GPU with 1024 CUDA cores and 32 Tensor Cores. It’s not the most powerful machine in the world, but in the SBC space, it’s reasonably powerful.

It’ll also provide 67 TOPs (trillions of processes per second), which is how AI is measured. For comparison, the SBC Raspberry Pi 5 and its AI kit can provide around 13 TOPs. An M4 Mac Mini can do 38 TOPS.

Nvidia has included a custom version of Linux for projects to run on, complete with Nvidia theming.

Nvidia Jetson Orin Nano performance is impressive – so is its price

In a video breaking down its performance by Dave’s Garage, the Orin was put through its paces with a local large language model (LLM). This is essentially like running your own version of ChatGPT.

Using Meta’s Llama 3.2, it managed to run at 21 tokens a second. A token is the metric used to count how many characters an LLM can produce. OpenAI’s estimates put it at 4 characters per second.

Compared to major PC rigs or workstations that are running LLMs, it’s not much. However, when you consider it only draws 25 watts of energy at its maximum output, it’s very impressive. It manages this by leveraging the CUDA cores, Nvidia’s tech for processing.

The Jetson Orin Nano isn’t going to be someone’s replacement PC. It’s more for development, and eventually to be built into an embedded solution or be the jumping-off point for a larger project.

The Jetson Orin Nano is also interesting thanks to its price point. Nvidia has slashed the price from $499 to $249, making it a cheaper entry point into AI development. Since the pandemic and electronics shortage, as well as cryptocurrency jacking up the price of graphics cards thanks to scarcity, Nvidia has rolled with it.

Its prices on GPUs have inflated, with the upcoming RTX 5090 rumored to be north of $1500 and nearing $2000. Even its lower-end cards, like the RTX 4060, have been criticized for being overtly expensive compared to the competition.

Nvidia has its big CES 2025 event coming up, where it’ll share more about the world of AI and its future for gaming hardware. We’ve yet to see what the Nano can do gaming-wise, however.

The post Nvidia’s new AI mini-PC brings pint sized power for developers appeared first on ReadWrite.

]]>
Pexels
YouTube partners with celebrity agency to protect them from AI https://readwrite.com/youtube-partners-with-celebrity-agency-to-protect-them-from-ai/ Wed, 18 Dec 2024 11:52:43 +0000 https://readwrite.com/?p=432756 camera with code over it

YouTube and Hollywood agency Creative Artists Agency (CAA) are partnering together. It comes as the Google-owned platform takes further action… Continue reading YouTube partners with celebrity agency to protect them from AI

The post YouTube partners with celebrity agency to protect them from AI appeared first on ReadWrite.

]]>
camera with code over it

YouTube and Hollywood agency Creative Artists Agency (CAA) are partnering together. It comes as the Google-owned platform takes further action against malicious artificial intelligence content.

YouTube has been working on these tools for quite some time now and quietly launched a similar program in July for anyone to use.

One of the major differences between the July update is this new partnership will focus on actors and similar talent. It’ll allow them to take down videos that use their likeness through generative AI methods.

Speaking in a press release, Neal Mohan the CEO of YouTube said, “At YouTube, we believe that a responsible approach to AI starts with strong partnerships.”

“In the days ahead, we’ll work with CAA to ensure artists and creators experience the incredible potential of AI while also maintaining creative control over their likeness.”

Part of the reason for a partnership with CAA is potentially thanks to its “CAAvault”. This scans and then stores the CAA’s client’s likeness, and with a database onboard, should make it easier for those affected to take down videos.

Bryan Lourd, CEO and Co-Chairman of Creative Artists Agency added, “At CAA, our AI conversations are centered around ethics and talent rights, and we applaud YouTube’s leadership for creating this talent-friendly solution, which fundamentally aligns with our goals.”

For instance, megastar Will Smith has been at the forefront of those parodied in AI-generated videos. With every major video generator’s update, a new version of Smith eating spaghetti is made to show progress.

YouTube adds more protections as advanced AI video generators launch

The move comes as OpenAI and Google both launch their new generative AI video models. Both companies will have huge safeguards in place to prevent malicious content being generated.

However, AI software can be manipulated through prompts, or other unprotected models could still be used. While there’s no video component yet, Grok’s latest image-generation model has succeeded in replicating celebrities.

Google is also gearing up for a wider release of its Veo 2 video model, which has wowed those in the industry. The initial – and highly curated – trailer shows hyper-realistic humans, which have the potential to be one of CAA’s clients.

Featured image: Pexels, Wikicommons

The post YouTube partners with celebrity agency to protect them from AI appeared first on ReadWrite.

]]>
Pexels
OpenAI expands access to ChatGPT Search for all users https://readwrite.com/openai-expands-access-to-chatgpt-search-for-all-users/ Tue, 17 Dec 2024 17:15:00 +0000 https://readwrite.com/?p=432679 AI depiction of advanced AI search model / OpenAI has expanded access to its ChatGPT Search tool for all users.

AI heavyweight OpenAI is set to open up its ChatGPT Search product to all users, moving beyond the initial exclusivity… Continue reading OpenAI expands access to ChatGPT Search for all users

The post OpenAI expands access to ChatGPT Search for all users appeared first on ReadWrite.

]]>
AI depiction of advanced AI search model / OpenAI has expanded access to its ChatGPT Search tool for all users.

AI heavyweight OpenAI is set to open up its ChatGPT Search product to all users, moving beyond the initial exclusivity for paid subscribers.

The company had planned to expand the facility to free users from the outset, and now that has materialized within a matter of weeks, as reported by Bloomberg. The tool will now be available to anyone with an account for OpenAI, across mobile apps and the web.

First launched last month, ChatGPT Search was designed to provide “fast, timely answers with links to relevant web sources, which you would have previously needed to go to a search engine for”

The extended launch means users can now set ChatGPT Search as their default search engine on their web browser, firing a shot toward Google in the ever-increasing competition stakes. 

New AI video generation launch for OpenAI

When OpenAI confirmed the update on Day Eight of their 12-day live-streamed product events, they demonstrated the new tool by searching for activities in Zurich before Christmas, with verbal responses returned.

OpenAI has also used the series of displays to introduce a more expensive ChatGPT Pro subscription option, as well as the launch of Sora, an AI video generation tool.

This is another step forward in the evolution of OpenAI, which first introduced ChatGPT in 2022. Since this breakthrough, AI firms have been pushing to introduce generative AI across various services, including online search.

Conversely, Reddit has taken a step in the opposite direction

We are familiar with AI developers licensing content from Reddit to train their models, but last week, the mega-site confirmed it was launching its own AI search tool, Reddit Answers.

An official blog post from the company detailed the test launch, envisioned to provide a new method to extract crucial information from Reddit’s plethora of communities.

Image credit: Via Midjourney

The post OpenAI expands access to ChatGPT Search for all users appeared first on ReadWrite.

]]>
Pexels
Google’s new Whisk AI tool will “remix” your images https://readwrite.com/googles-new-whisk-ai-tool-will-remix-your-images/ Tue, 17 Dec 2024 12:59:07 +0000 https://readwrite.com/?p=432641 promo image for whisk that says "prompt less, play more"

Google has released a new artificial intelligence (AI) image generation tool called Whisk, which lets you submit prompts as images… Continue reading Google’s new Whisk AI tool will “remix” your images

The post Google’s new Whisk AI tool will “remix” your images appeared first on ReadWrite.

]]>
promo image for whisk that says "prompt less, play more"

Google has released a new artificial intelligence (AI) image generation tool called Whisk, which lets you submit prompts as images and refine them with text.

According to a blog post, users can submit an image to act as the subject, the scene, and the style, and Whisk will use those inputs to remix (AI generates) something new based on the prompts, “from a digital plushie to an enamel pin or sticker”, or presumably, pictures of those things.

How does Whisk work?

Behind the scenes, Whisk is using Google’s Gemini AI to create detailed text prompts from the images you input. It then feeds those text prompts into its newly updated Imagen 3 AI image generator. According to Google, this process extracts the “essence” of the images you submit, allowing it to generate unique remixes.

Google does state that: “Since Whisk extracts only a few key characteristics from your image, it might generate images that differ from your expectations. For example, the generated subject might have a different height, weight, hairstyle or skin tone.”

As such, users can edit or supplement the Gemini-generated prompts in order to tweak and finesse the Whisk output and get it to create something closer to what they want.

Google states in its blog post that Whisk is not quite like a traditional image-generation tool. “In our early testing with artists and creatives, people have been describing Whisk as a new type of creative tool — not a traditional image editor. We built it for rapid visual exploration, not pixel-perfect edits. It’s about exploring ideas in new and creative ways, allowing you to work through dozens of options and download the ones you love.”

How to try out Whisk

Currently, Whisk is only available in the US. If you’re based in America you can try it out for free on Google Labs’ website. Google has not given any indication as to when it will be available in other countries.

Featured image credit: Google

The post Google’s new Whisk AI tool will “remix” your images appeared first on ReadWrite.

]]>
Pexels
Google’s new AI video generation rivals OpenAI’s Sora https://readwrite.com/googles-new-ai-video-generation-rivals-openais-sora/ Tue, 17 Dec 2024 10:39:20 +0000 https://readwrite.com/?p=432586 Some AI generated images

Google Deepmind has introduced Veo 2, a new artificial intelligence (AI) video generation tool that builds on the original Veo… Continue reading Google’s new AI video generation rivals OpenAI’s Sora

The post Google’s new AI video generation rivals OpenAI’s Sora appeared first on ReadWrite.

]]>
Some AI generated images

Google Deepmind has introduced Veo 2, a new artificial intelligence (AI) video generation tool that builds on the original Veo and creates “incredibly high-quality videos” in a move to beat OpenAI at their own game.

The next iteration of Google’s flagship text-to-video tool Veo, Veo 2 can create “minutes in length clips” in 4k resolutions and Google has emphasized its understanding of cinematographic requests, stating in their press release: “Suggest “18mm lens” in your prompt and Veo 2 knows to craft the wide angle shot that this lens is known for, or blur out the background and focus on your subject by putting “shallow depth of field” in your prompt.”

According to Google, Veo 2 is less likely to “‘hallucinate’ unwanted details”, and has an “improved understanding of real-world physics and nuances of human movement and expression”.

Sora, OpenAI’s flagship video generation AI, is currently able to produce HD videos up to a minute in length, which means that currently by the numbers, Google’s Veo 2 is a huge leap forward for video creation.

However, Google has been circumspect with rolling out access to the tool. Currently, users can only access Veo 2 via their VideoFX platform, which is currently operating a waitlist. Once in, users still won’t be able to use the full capabilities of the tool, as it is capped at 720p resolution and eight seconds in length, whereas ChatGPT Pro subscribers can create 1080p videos up to 20 seconds long with Sora.

Improvements to Imagen 3

In the same release, Google claims to have made improvements to their text-to-image tool Imagen 3. A key feature is that Imagen 3 can “render more diverse art styles with greater accuracy”, as well as follow prompts more accurately.

Imagen 3 will be rolled out across Google’s ImageFX tool. Unlike VideoFX, there is no waitlist to try out ImageFX as long as you have a Google account.

Featured image credit: Google

The post Google’s new AI video generation rivals OpenAI’s Sora appeared first on ReadWrite.

]]>
Pexels
AI-powered app MeMind ‘reduces suicide rates’ in Mexico’s Yucatán https://readwrite.com/ai-app-memind-reduces-suicide-rates-mexico-yucatan/ Mon, 16 Dec 2024 12:22:31 +0000 https://readwrite.com/?p=432452 An AI chatbot assisting with a person's mental health in a calming home environment. AI-powered app MeMind 'reduces suicide rates' in Mexico's Yucatán

In Yucatán, Mexico, where suicide rates are twice the national average, the government has implemented a new mental health initiative… Continue reading AI-powered app MeMind ‘reduces suicide rates’ in Mexico’s Yucatán

The post AI-powered app MeMind ‘reduces suicide rates’ in Mexico’s Yucatán appeared first on ReadWrite.

]]>
An AI chatbot assisting with a person's mental health in a calming home environment. AI-powered app MeMind 'reduces suicide rates' in Mexico's Yucatán

In Yucatán, Mexico, where suicide rates are twice the national average, the government has implemented a new mental health initiative powered by AI. The centerpiece of this effort is MeMind, a smartphone app designed to assess and monitor suicide risk through AI-driven diagnostic tools embedded in its surveys, according to a report by Rest of World.

How AI-powered app is helping to prevent suicides in Mexico

Since its launch in 2022, MeMind has attracted 80,000 users and, according to officials, contributed to a 9% reduction in the state’s suicide rate.

The app’s main feature is a 15-minute mental health questionnaire that analyzes users’ responses to determine their level of risk. For individuals identified as higher risk, MeMind provides personalized follow-ups and alerts health teams to potential crises. Within just two months of its debut, the app flagged over 200 high-risk individuals, according to former Governor Mauricio Vila Dosal.

Developed in collaboration with Yucatán’s Department of Health, MeMind says it tailors its questions to address local challenges such as domestic violence, alcoholism, and substance abuse—factors strongly linked to suicidal ideation in the region. Importantly, the app ensures user anonymity, sharing data with healthcare providers only with the user’s consent.

In August, the government wrote on X (translated from Spanish): “As a team, we are changing mental health care with technology and tools, such as the #MeMind application, we provide specialized guidance on emotional health and specialist care.”

‘Giving people tools’ to help with mental health

Enrique Baca, a psychiatrist from the Autonomous University of Madrid in Spain who played a key role in developing the technology behind MeMind, explained how the app was customized for Yucatán. Speaking to the outlet, he said the Spanish-language version was adapted following extensive consultations with the state’s Department of Health.

Baca noted that half of the app’s users are still actively engaging with it after six months. “We identified a necessity, which was to know what was going on in each community, and then to be able to give people tools,” he explained. “Now we have large amounts of useful data on the population.”

Over the last two years, MeMind has flagged 10,000 users in Yucatán as being at risk of suicide, providing them with information about treatment options such as talk therapy and support groups, Baca shared.

However, he acknowledged a key limitation. The app can only track users who log in and complete regular check-ins. This presents a challenge, as young men—the demographic most at risk of dying by suicide—are the least likely to engage consistently with the app.

Baca revealed plans to add chatbot therapy to MeMind, pointing out that users often feel more comfortable engaging with a machine than people.

AI is currently being used in other aspects of mental health. ReadWrite reported in June that brain scans combined with AI led to an important breakthrough in categorizing depression into six unique types, as part of a study conducted by researchers at Stanford Medicine in California.

Featured image: Ideogram

The post AI-powered app MeMind ‘reduces suicide rates’ in Mexico’s Yucatán appeared first on ReadWrite.

]]>
Pexels
Telegram uses AI to remove 15 million suspected illicit groups and channels https://readwrite.com/telegram-ai-remove-15-million-illicit-groups-channels/ Mon, 16 Dec 2024 11:40:44 +0000 https://readwrite.com/?p=432430 An AI generated image showing Telegram employees using AI to remove illicit groups from the platform.

Telegram has reportedly removed around 15 million illicit groups from its messaging platform, using AI to tackle the issue. Over… Continue reading Telegram uses AI to remove 15 million suspected illicit groups and channels

The post Telegram uses AI to remove 15 million suspected illicit groups and channels appeared first on ReadWrite.

]]>
An AI generated image showing Telegram employees using AI to remove illicit groups from the platform.

Telegram has reportedly removed around 15 million illicit groups from its messaging platform, using AI to tackle the issue. Over recent months, the platform has faced mounting pressure to rid its app of illegal content. As previously reported by ReadWrite, this scrutiny resulted in the arrest of CEO Pavel Durov in France, where he faces charges related to the harmful and unlawful material shared via the app.

While Durov remains under strict restrictions following his first court appearance, the instant messaging service says it has made major strides in addressing the issue. The platform reports having removed over 15 million groups and channels involved in fraud and other illegal activities.

How Telegram has used AI to remove millions of suspected illicit groups

Telegram credited its success to “enhanced with cutting-edge AI moderation tools,” describing it as a step forward in reducing illicit content. This follows a crackdown announced in September, during which Durov expressed the company’s intent to meet government demands for stricter content regulation.

The new moderation page shows Telegram’s attempt at transparency in its operations. In a post on his Telegram channel, Durov stressed that the company was dedicated to combat illegal activities.

He revealed that the moderation team has been diligently working behind the scenes over the past few months, removing “millions of pieces of content that violate its Terms of Service, including incitement to violence, sharing child abuse materials, and trading illegal goods.”

Graph shows total number of illicit groups and channels blocked on Telegram using AI.
Total groups and channels blocked on Telegram using AI. Credit: Telegram

Durov has pledged to keep users updated with real-time insights into the moderation team’s efforts. According to Telegram’s new moderation page, the platform has ramped up enforcement significantly since Durov’s arrest, and it’s clear the team has been busy. The removal of illicit accounts has been ongoing since 2015, but the numbers are staggering—over 15.4 million illegal groups and channels have been blocked in 2024 alone.

This year, Telegram has also intensified its fight against Child Sexual Abuse Materials (CSAM), banning 703,809 groups and channels. Alongside user reports and proactive moderation, Telegram collaborates with third-party organizations to combat CSAM, leading to thousands of instant bans.

Some of the biggest contributors to these efforts include the Internet Watch Foundation, the National Center for Missing and Exploited Children, the Canadian Center for Child Protection, and Stitching Offlimits.

Telegram’s ongoing moderation efforts

The platform’s commitment to tackling violence and terrorist propaganda is nothing new. Since 2016, Telegram has provided daily updates on these initiatives, earning recognition from Europol. In collaboration with numerous organizations since 2022, Telegram has banned 100 million pieces of terrorist content, with 129,099 blocked in 2024 alone.

Meanwhile, Durov’s legal case in France remains unresolved. While he’s currently out on €5 million ($5.3 million) bail, the platform seems determined to press on with its cleanup efforts.

Featured image: Ideogram

The post Telegram uses AI to remove 15 million suspected illicit groups and channels appeared first on ReadWrite.

]]>
Pexels
Photoshop gets AI powered reflection removal – here’s how to use it https://readwrite.com/photoshop-gets-ai-powered-reflection-removal-heres-how-to-use-it/ Fri, 13 Dec 2024 12:20:29 +0000 https://readwrite.com/?p=432273 photoshop logo on refelction removal images provided by adobe

Adobe has updated Photoshop with another artificial intelligence-backed feature. The company’s latest tool allows for heavy reflections to be removed… Continue reading Photoshop gets AI powered reflection removal – here’s how to use it

The post Photoshop gets AI powered reflection removal – here’s how to use it appeared first on ReadWrite.

]]>
photoshop logo on refelction removal images provided by adobe

Adobe has updated Photoshop with another artificial intelligence-backed feature. The company’s latest tool allows for heavy reflections to be removed from images, but only from RAW images for the time being.

It’s part of the ongoing updates for Photoshop 2025, which was released in October. By analyzing the image, Photoshop detects the reflection and the intended target of the photo. In the blog, Adobe admits that the AI “mixes up” the reflection and wanted scene, which is why it provides a slider.

In our testing, the model works as intended, but struggles with smaller reflections like those on glasses. It also requires you to edit around the removal effect, as it either darkens or lightens the image excessively.

adobe reflection removal window

Right now, the reflection remover is limited to RAW images. It’s not explained in the Adobe blog, but it’s presumed that thanks to RAW photos containing far more data compared to JPG or PNG, it’s able to work better with the AI.

Adobe’s AI push has seen it directly integrate image generation into its suite of apps. Video editing app, Premiere Pro, has an auto-caption generator for example. Photoshop, however, has seen the most additions. On top of image generation, it can leverage generative AI to assist with removing elements from photos.

The company has been moving in this direction for nearly a decade, as in 2015 it introduced its machine learning backend, Adobe Sensei. This provided the “Content Aware” tool, which scanned the images and layers of the file, and can “create” an estimation to mask something in the image.

With generative AI, it not only pulls from the image in the file, but also from Adobe’s trained models and backend.

How to use AI reflection removal in Photoshop 2025

  • To get the feature, you’ll need to upgrade to Photoshop 2025 in the Creative Cloud app
  • After that, it’ll automatically update the Camera RAW plugin to 17.1
  • Open Photoshop and open Camera RAW by pressing Shift + Ctrl + A on Windows or Shift + Cmd + A on macOS
  • In the corner, press the setting button, and in the menu press Technology Previews
  • Press OK and then restart Photoshop itself
  • Load up your RAW image and press the eraser button the side of Camera RAW
  • Check “Distraction Removal” and it’ll begin to analyze the image for reflections to remove

Featured image: Adobe’s reflection removal library

The post Photoshop gets AI powered reflection removal – here’s how to use it appeared first on ReadWrite.

]]>
Pexels
OpenAI gets festive as it launches Santa Mode into AI tool https://readwrite.com/openai-gets-festive-as-they-launch-santa-mode-into-ai-tool/ Fri, 13 Dec 2024 11:14:54 +0000 https://readwrite.com/?p=432261 iPhone outline in black in the center of the screen. A blue and white orb is in the center of the phone, against a white background.

OpenAI are in the holiday spirit as they include an all-new ‘Santa Mode’ into ChatGPT, alongside their other pre-Christmas celebrations.… Continue reading OpenAI gets festive as it launches Santa Mode into AI tool

The post OpenAI gets festive as it launches Santa Mode into AI tool appeared first on ReadWrite.

]]>
iPhone outline in black in the center of the screen. A blue and white orb is in the center of the phone, against a white background.

OpenAI are in the holiday spirit as they include an all-new ‘Santa Mode’ into ChatGPT, alongside their other pre-Christmas celebrations.

A Santa voice is being rolled out to everyone across all ChatGPT platforms from this week (December 12) in Voice Mode.

With this new addition, people can speak to ‘Santa’ who will take over the voice of the chatbot. The voice will be available to use until the end of the month. The company writes in an X post that he will then “retire back to the North Pole.”

To chat with Saint Nick, users can simply tap on the snowflake or select the voice in ChatGPT settings. Another way to access it is through clicking on Voice Mode and navigating to the voice picker in the upper right-hand corner.

OpenAI carry on the festivities as they roll out Santa voice amidst new features

It was on December 5 when the company announced it would be starting the ‘12 Days of OpenAI’ which includes 12 days, 12 livestreams, and “a bunch of new things, big and small.”

Since its arrival, we’ve had some major announcements take place including the introduction of the long-awaited AI video generator Sora. This was on day three, with OpenAI stating they were moving the model out of the research preview.

On day one, the company shared news of its new subscription offering named ChatGPT Pro which is a $200 monthly plan. On the same day, they shared details about the OpenAI o1 System Card.

Day two saw applications to its ‘Reinforcement Fine-Tuning Research Program’ open as the AI-focused company said it was “expanding alpha access to Reinforcement Fine-Tuning and inviting researchers, universities, and enterprises with complex tasks to apply.”

After Sora, on day four, three team members did a video sharing information about the Canvas interface.

On day five, CEO Sam Altman and two colleagues donned festive sweaters and shared that ChatGPT is now integrated into Apple experiences within iOS, iPadOS, and macOS.

Then, it was day six which brought the Santa mode and the news of the rollout of Advanced Voice with vision.

Featured Image: Screenshot via OpenAI account on X

The post OpenAI gets festive as it launches Santa Mode into AI tool appeared first on ReadWrite.

]]>
Pexels
ChatGPT finally launches Advanced Voice Mode with vision https://readwrite.com/chatgpt-finally-launches-advanced-voice-mode-with-vision/ Fri, 13 Dec 2024 10:05:06 +0000 https://readwrite.com/?p=432237 Phone outline close up, can see the bottom right-hand corner. Can see a microphone icon and another with lines encircled next to it. The end of a search button is also visible.

It was almost seven months ago when OpenAI first demoed Advanced Voice Mode with vision and it’s now finally here.… Continue reading ChatGPT finally launches Advanced Voice Mode with vision

The post ChatGPT finally launches Advanced Voice Mode with vision appeared first on ReadWrite.

]]>
Phone outline close up, can see the bottom right-hand corner. Can see a microphone icon and another with lines encircled next to it. The end of a search button is also visible.

It was almost seven months ago when OpenAI first demoed Advanced Voice Mode with vision and it’s now finally here.

The feature was officially launched on day six of the company’s ‘12 Days of OpenAI,’ with users now able to utilize the chatbot through spoken input, images, and video.

In a podcast-style video, Kevin – who leads product at OpenAI – says: “We’re excited to announce that we’re bringing video to advanced voice mode.”

While the ability to talk to ChatGPT out loud has been possible through Advanced Voice, it’s now possible to do this through video. The team say the tool has been a “long time coming” and suggest it can be used to “ask for help, troubleshoot, or use it to learn something new.”

Users can now also screenshare while using the ChatGPT Advanced Voice Mode with vision feature to receive instant feedback on whatever is on the screen.

While it was announced in the pre-Christmas celebrations, it may take a few days for it to roll out fully. The company states: “All Team and most Plus & Pro users should have access within the next week in the latest version of the ChatGPT mobile app.

“We’ll get the feature to Plus & Pro users in the EU, Switzerland, Iceland, Norway, & Liechtenstein as soon as we can.”

Those on the Enterprise and Edu plans will gain access early next year.

How does ChatGPT’s Advanced Voice Mode with vision work?

The rolling out of this feature means the ChatGPT app and home page look a little different.

To access Advanced Voice Mode with video, simply click on the far-right icon next to the search function on ChatGPT. This will bring up a new page which features a video button, alongside a microphone, three dots, and the exit out icon.

When clicking the video button, users can then ask questions and speak to ChatGPT. The chatbot will respond, as though it’s partaking in a real life conversation.

A Santa voice has been added too which can be selected in ChatGPT settings or within the Voice Mode through the voice picker in the upper right corner.

Featured Image: Screenshot from OpenAI video on X

The post ChatGPT finally launches Advanced Voice Mode with vision appeared first on ReadWrite.

]]>
Pexels
ChatGPT powers Apple Intelligence in Siri – how it works https://readwrite.com/chatgpt-apple-intelligence-siri-advanced-ai-ios-18-2-update/ Fri, 13 Dec 2024 05:52:56 +0000 https://readwrite.com/?p=432108 ChatGPT powers Apple Intelligence in Siri with advanced AI in iOS 18.2 update. Three people including OpenAI CEO Sam Altman in festive holiday sweaters demonstrate ChatGPT integration with Siri on an iPhone, showcasing advanced AI features in iOS 18.2.

Apple just gave its AI a big upgrade. On Wednesday (Dec. 11), iOS 18.2 dropped, and ChatGPT is now part… Continue reading ChatGPT powers Apple Intelligence in Siri – how it works

The post ChatGPT powers Apple Intelligence in Siri – how it works appeared first on ReadWrite.

]]>
ChatGPT powers Apple Intelligence in Siri with advanced AI in iOS 18.2 update. Three people including OpenAI CEO Sam Altman in festive holiday sweaters demonstrate ChatGPT integration with Siri on an iPhone, showcasing advanced AI features in iOS 18.2.

Apple just gave its AI a big upgrade. On Wednesday (Dec. 11), iOS 18.2 dropped, and ChatGPT is now part of the mix, thanks to OpenAI’s festive 12-day celebration. This means Siri got a major boost with advanced text and vision capabilities, all accessible right from the Siri interface.

During the launch demo, they leaned into the holiday vibe with festive sweaters and gift-giving with Apple on a live stream, highlighting how closely ChatGPT and Apple Intelligence are working together. OpenAI’s models are not only making Siri sharper but also powering a bunch of the new features Apple rolled out that same day.

Unfortunately, while OpenAI was busy celebrating its big Apple integration, things hit a bit of a snag on day five. As ReadWrite reported, ChatGPT experienced a significant outage, which meant that if you tried to use ChatGPT, Sora, APIs, or even DALL-E right after the announcement, you probably ran into error messages or those “currently unavailable” screens. At the time of writing, however, everything’s back up and running.

How is ChatGPT powering Apple Intelligence in Siri?

Apple is positioning its Apple Intelligence as a way to make your devices more creative and intuitive, improving how you use apps and tools across your iPhone, iPad, or Mac. Siri, which gets a major upgrade with ChatGPT integration, is taking the voice assistant to a whole new level.

It can now handle complex questions with natural, fluid responses instead of short, robotic answers. You can ask detailed, nuanced questions and get thoughtful, context-aware answers that feel more human.

During the launch demo, OpenAI CEO Sam Altman, product team member Miqdad Jaffer, and engineering manager Dave Cummings showed off these capabilities. For example, they had Siri use ChatGPT to review documents and answer related questions—answers that came directly from Apple’s assistant.

They also demonstrated how you can seamlessly send information to ChatGPT for deeper analysis, and how Siri can now open tools like Canvas and DALL-E. In one particular moment, Siri reviewed the latest ChatGPT model card, discussed its coding abilities, and then sent the response to the ChatGPT app to code a program visualizing those abilities.

ChatGPT will now also work with Siri on the iPhone 16 and 15 Pro, stepping in automatically whenever your query needs a little extra AI-powered reasoning. On iPhone 16 models, Visual Intelligence takes things up a notch by using ChatGPT to analyze and offer insights on images—like in their fun demo of a Christmas sweater contest.

AI writing tools

The integration also powers systemwide Writing Tools, letting you create content and generate images with ChatGPT right inside Apple apps. You don’t need a ChatGPT account to use these features. Plus, Apple’s built-in privacy protections ensure your data stays safe, with no storage or IP tracking involved.

While the buzz around this collaboration between Apple and OpenAI was hotter over the summer, this rollout might just reignite excitement. With Apple Intelligence’s earlier reception being somewhat underwhelming, this could be the spark Siri needs to compete with other rapidly evolving voice assistants.

Featured image: OpenAI via X

The post ChatGPT powers Apple Intelligence in Siri – how it works appeared first on ReadWrite.

]]>
Pexels
Microsoft AI CEO hires ex-DeepMind staff for new London AI health unit https://readwrite.com/microsoft-ai-ceo-hires-ex-deepmind-staff/ Thu, 12 Dec 2024 13:08:32 +0000 https://readwrite.com/?p=432110 A photo of an AI health unit building in London, with a British flag flying high. The building has a modern architecture with a glass exterior. There are also green plants around the building. The background contains trees.

The AI CEO at Microsoft is reported to have hired a number of former colleagues to help run an all… Continue reading Microsoft AI CEO hires ex-DeepMind staff for new London AI health unit

The post Microsoft AI CEO hires ex-DeepMind staff for new London AI health unit appeared first on ReadWrite.

]]>
A photo of an AI health unit building in London, with a British flag flying high. The building has a modern architecture with a glass exterior. There are also green plants around the building. The background contains trees.

The AI CEO at Microsoft is reported to have hired a number of former colleagues to help run an all new AI health unit in London.

The Financial Times reports that Mustafa Suleyman has hired Dominic King, a UK-trained surgeon who was the former head of DeepMind’s health unit.

The publisher states the CEO has also poached Christopher Kelly who was a clinical research scientist at DeepMind, along with two other people. Suleyman was a co-founder at the AI startup DeepMind, before co-founding Inflection, and was the former head of applied AI.

DeepMind was acquired by Microsoft and Suleyman joined the technology giant in March of this year. It was then that a new organization named Microsoft AI was created, focused on advancing Copilot and other consumer AI products and research.

Microsoft AI reported to be opening a London-based AI health unit

This news comes at a time when the health industry has seen growth due to the AI boom, as people are turning to chatbots for support with health-related queries.

A study by Deloitte revealed that 48 per cent of respondents asked generative AI chatbots like ChatGPT, Gemini, Copilot, or Claude health-focused questions.

The FT reports in their scoop that Microsoft has confirmed the creation of the new unit: “In our mission to inform, support and empower everyone with responsible AI, health is a critical use case.

“We continue to hire top talent in support of these efforts.”

The Microsoft AI CEO is British and was born and raised in London. Shortly after he joined the team, he announced in April that a new hub would be opening in England’s capital city.

It was stated this hub would “drive pioneering work to advance state-of-the-art language models and their supporting infrastructure” as well as creating “world-class tooling for foundation models, collaborating closely with our AI teams across Microsoft and with our partners, including OpenAI.”

Suleyman said the Microsoft AI London hub was “great news for Microsoft AI and for the U.K.”

Featured Image: AI-generated via Ideogram

The post Microsoft AI CEO hires ex-DeepMind staff for new London AI health unit appeared first on ReadWrite.

]]>
Pexels
ChatGPT and Sora face a major outage on same day as Meta issue https://readwrite.com/chatgpt-and-sora-face-a-major-outage/ Thu, 12 Dec 2024 12:07:18 +0000 https://readwrite.com/?p=432081 A photo of a laptop with a blank screen. The laptop is on a desk with a few objects such as a pen, a notebook, and a phone. A person's hand with a watch is visible, holding the laptop, with a bent wrist. The background is blurred and contains a plant, a lamp, and a wall.

OpenAI faced an outage on Wednesday as users were unable to access ChatGPT and the newly released text-to-video AI platform… Continue reading ChatGPT and Sora face a major outage on same day as Meta issue

The post ChatGPT and Sora face a major outage on same day as Meta issue appeared first on ReadWrite.

]]>
A photo of a laptop with a blank screen. The laptop is on a desk with a few objects such as a pen, a notebook, and a phone. A person's hand with a watch is visible, holding the laptop, with a bent wrist. The background is blurred and contains a plant, a lamp, and a wall.

OpenAI faced an outage on Wednesday as users were unable to access ChatGPT and the newly released text-to-video AI platform Sora.

The incident occurred at around 3pm PT and it wasn’t until some hours later, at 7pm PT, when the tools were slowly coming back to life online.

The company quickly took to X (formerly Twitter) to share the news as they said: “We’re experiencing an outage right now. We have identified the issue and are working to roll out a fix.

“Sorry and we’ll keep you updated!”

A few hours later, the company provided an update saying: “ChatGPT, API, and Sora were down today but we’ve recovered.”

The issue meant that some users were unable to log in, while others saw error messages when trying to utilize the AI-based features.

On the real-time problem and outage monitoring site Downdetector, almost 30,000 reports came in at its latest spike. ChatGPT was the most reported problem.

While the issue didn’t just impact a few states, it was people in Los Angeles, Dallas, Houston, New York, and Washington who reported the problems in masses on the site.

The issue occurred on the same day that Meta had a global outage, as people were unable to access Instagram, Facebook, WhatsApp, Messenger, and Threads.

What caused the OpenAI – ChatGPT and Sora – outage?

On the OpenAI website, the company logged updates about the issues people were facing and gave insight into the situation.

The AI company hasn’t yet issued full reasoning behind the outage, but said in their log that they will be running “a full root-cause analysis of this outage and will share details” when it is complete.

It was earlier on in the day when the team stated they had found a pathway to recovery which is when they first started to see some traffic successfully returning.

While OpenAI were working to fix the issue behind the scenes, Elon Musk took the opportunity to reply to the tweet with the username of his generative AI chatbot Grok.

Featured Image: AI-generated via Ideogram

The post ChatGPT and Sora face a major outage on same day as Meta issue appeared first on ReadWrite.

]]>
Pexels
Google intros Gemini 2.0 with a focus on AI agents https://readwrite.com/google-intros-gemini-2-0-with-a-focus-on-ai-agents/ Wed, 11 Dec 2024 17:07:43 +0000 https://readwrite.com/?p=431973 Google intros Gemini 2.0 with a focus on AI agents

Google has released the first version of Gemini 2.0, its next-generation large language model, and the focus is on AI… Continue reading Google intros Gemini 2.0 with a focus on AI agents

The post Google intros Gemini 2.0 with a focus on AI agents appeared first on ReadWrite.

]]>
Google intros Gemini 2.0 with a focus on AI agents

Google has released the first version of Gemini 2.0, its next-generation large language model, and the focus is on AI agents that do work on your behalf.

Gemini 2.0 Flash has upgrades that make it more useful for agentic AI, including multimodal output. It can now produce natively-created images alongside text, or make “steerable” text-to-speech audio in multiple languages. The LLM can also directly access Google searches, run code, and perform third-party user-defined tasks.

The move allows for agents that can not only perform multiple steps, but work across domains and better understand physical objects. Accordingly, an update to Project Astra with the new model can use Google’s search, Lens, and Maps to better interpret what you see. It can speak in multiple and even mixed languages, and remembers more of your recent and past conversations.

Google is also hinting at its long-term plans with new prototype AI agents. Project Mariner uses Gemini 2.0 to explore “human-agent interaction,” currently in the web browser. It can navigate and perform tasks on the web by understanding and reasoning around what’s on screen.

In Gemini Advanced using the 1.5 Pro model, Deep Research is an early AI assistant that can create a multi-step research plan and analyze info once you’ve approved the strategy. It doesn’t just search the web, Google notes — it “refines” what it learns through multiple searches, and creates a report with linked sources. This theoretically finishes research in minutes that would normally take hours.

Gemini 2.0 and Deep Research availability

Google is making Gemini 2.0 Flash available now as an experimental model. Worldwide, you can use a “chat optimized” version on the desktop and mobile web. The native Gemini mobile app will have access in the near future, and 2.0 will come to more Google services in 2025.

Project Astra will soon be available for a “small group” of testers using prototype smart glasses. Project Mariner is already available for trusted testers through a Chrome extension, but Google wants to talk to web creators to make sure any wider work is deployed “safely and responsibly.”

Deep Research is available today for Gemini Advanced subscribers in English on desktop and mobile browsers, and comes to mobile in 2025.

There’s pressure to roll out 2.0 relatively quickly. Many companies see AI agents as the eventual replacement for apps, as they can potentially handle many duties at the same time. Former Google and Meta leaders recently formed a startup, /dev/agents, devoted to making a platform for agentic AI. The Gemini revamp could be key to maintaining Google’s clout in generative AI as the technology evolves from its narrow-purpose origins.

The post Google intros Gemini 2.0 with a focus on AI agents appeared first on ReadWrite.

]]>
Pexels
X’s new AI image generator Aurora pulled offline https://readwrite.com/xs-new-ai-image-generator-aurora-pulled-offline/ Wed, 11 Dec 2024 14:01:33 +0000 https://readwrite.com/?p=431925 elon musk with grok logo behind it

Aurora, the new AI image generator from Elon Musk’s xAI was quickly taken down over the weekend after being made… Continue reading X’s new AI image generator Aurora pulled offline

The post X’s new AI image generator Aurora pulled offline appeared first on ReadWrite.

]]>
elon musk with grok logo behind it

Aurora, the new AI image generator from Elon Musk’s xAI was quickly taken down over the weekend after being made live. It is planned to be baked into Grok, X’s (formerly Twitter) generative AI chatbot, and has already drawn attention for its capabilities.

X hasn’t officially commented on the removal of Aurora, with speculation being that it was made live too early. Elon Musk simply said it was a “beta” and “will improve very fast”.

However, it might be drawing more attention thanks to limited guardrails around the software. As with Grok’s previous Flux AI creator, it appears Aurora is quite capable of generating copyrighted materials – as well as celebrities.

Multiple images were posted by users who managed to get in before Aurora was pulled. Some depict Elon Musk in the army or as Thor. Two users found that it’s exceptionally good at recreating Taylor Swift, Jeff Bezos, and other tech luminaries.

This shouldn’t come as a surprise, as these massive AI language models are trained on huge quantities of data. In the last few months, companies like Nvidia were found to be harvesting YouTube and Netflix for sources. OpenAI, creator of ChatGPT, has said it’s “impossible” to train these models without touching on copyrighted material.

However, X pushed the limits with Grok, as the platform became flooded with copyrighted images on Flux AI’s initial release.

Musk flexes X’s AI capabilities with Aurora

Musk is going all in on AI. The xAI company has one of the world’s largest AI clusters in the world. It’s made up of around 100,000 Nvidia chips, all designed to train and run the models xAI is developing.

It’s not known how much involvement Musk will have in xAI, as his duties are about to expand rapidly. Between running X, xAI, and Tesla, he is slated to run DOGE (Department of Government Efficiency) under the incoming Trump administration.

He’s also currently embroiled with a competitor, OpenAI, as it pivots to becoming a for-profit company. OpenAI has recently released its video generator, Sora, to praise and concern surrounding its realism.

The post X’s new AI image generator Aurora pulled offline appeared first on ReadWrite.

]]>
Pexels
Tesla’s Optimus Bot shows it stumbling and handling difficult terrain https://readwrite.com/tesla-optimus-bot-stumbling-handling-terrain/ Wed, 11 Dec 2024 13:54:25 +0000 https://readwrite.com/?p=431883 Tesla’s Optimus Bot shows it stumbling and handling difficult terrain

Tesla has taken another step forward with its humanoid robot, the Tesla Optimus Bot, improving its movement across challenging terrain.… Continue reading Tesla’s Optimus Bot shows it stumbling and handling difficult terrain

The post Tesla’s Optimus Bot shows it stumbling and handling difficult terrain appeared first on ReadWrite.

]]>
Tesla’s Optimus Bot shows it stumbling and handling difficult terrain

Tesla has taken another step forward with its humanoid robot, the Tesla Optimus Bot, improving its movement across challenging terrain. While it’s still a bit clunky, a recent video shared on X highlights the progress.

At first, the Optimus Bot was designed to navigate flat surfaces, like those found in factory settings, and wasn’t particularly speedy. Now, Tesla’s engineers have equipped the robot with new features that help it to walk more independently on varied terrains with a reasonable level of stability.

How does Tesla’s Optimus Bot walk on difficult terrain?

The robot relies solely on position sensors and compensation algorithms to maintain balance, effectively figuring out the terrain “blind.” Tesla has avoided using technologies like 3D cameras and lidar, likely to keep production costs down for when the robot eventually hits the market.

The video showcases Optimus walking over bumpy grass and through forested areas, handling slopes and uneven ground. Most of the time, the robot keeps going without a tumble. In one clip, Optimus slips, stumbles, but recovers his balance—much like a human being.

However, it’s still unclear how the robot would handle a full fall or if it can get back up on its own. Tesla seems to have anticipated such scenarios, adding protective pads to sensitive areas like the robot’s back, likely to shield it from damage during a fall.

Tesla CEO Elon Musk and other team members have shed more light on Optimus. Musk shared that the robot navigates uneven terrain using neural networks to control each limb with no remote control involved.

Tesla to add vision to robot

Milan Kovac, Tesla’s Vice President of Optimus Engineering, also chimed in by reposting the video. He pointed out that Optimus operates without vision, balancing itself without the aid of video feedback. Kovac even admitted he’s slipped on the same rough patches of terrain where the bot is shown walking and stumbling, pointing out how problematic the conditions are.

He wrote: “These runs are on mulched ground, where I’ve myself slipped before. What’s really crazy here is that for these, Optimus is actually blind!”

Looking ahead, the team plans to equip the robot with a vision to improve its situational awareness. Kovac mentioned that they’re also working on other improvements, like making its movements appear more natural on tricky surfaces, improving responsiveness to commands like speed and direction, and figuring out how to reduce damage and recover if it falls. In July, ReadWrite reported that Tesla will deploy the humanoid robots internally in 2024, with commercial availability in 2026, as the company states: “The future of autonomy and artificial intelligence will be realized through the creation of a fleet of autonomous vehicles and robots. “

Featured Image: Tesla

The post Tesla’s Optimus Bot shows it stumbling and handling difficult terrain appeared first on ReadWrite.

]]>
Pexels
OpenAI’s Sora AI video generator is now available to everyone https://readwrite.com/openais-sora-ai-video-generator-is-now-available-to-everyone/ Mon, 09 Dec 2024 18:19:37 +0000 https://readwrite.com/?p=431448 OpenAI's Sora AI video generator is now available to everyone

OpenAI has made its Sora AI video generator available to the public, including some tools that help creatives produce complex… Continue reading OpenAI’s Sora AI video generator is now available to everyone

The post OpenAI’s Sora AI video generator is now available to everyone appeared first on ReadWrite.

]]>
OpenAI's Sora AI video generator is now available to everyone

OpenAI has made its Sora AI video generator available to the public, including some tools that help creatives produce complex projects.

The long-in-development model lets you generate short video clips from text prompts. However, you can also “remix” prompts to varying degrees by using additional prompts, and a Storyboard can string together multiple prompts while producing transitions between clips.

Sora limits videos to resolutions between 480p and 1080p for up to 20 seconds, and OpenAI has taken numerous steps to minimize the potentials for copyright violations and misinformation that affect both the company as well as rivals like xAI’s Grok. You can’t base videos around many forms of copyrighted material or some recognizable public figures, and all footage includes a watermark.

You need a ChatGPT Plus subscription to create up to 50 videos at 480p, and fewer at 720p. Pro users get 1080p access, 10 times the usage, and longer-lasting videos. OpenAI says it’s working on “tailored pricing” for a variety of users starting early 2025. Free ChatGPT users can always watch clips, but can’t create them.

OpenAI Sora: potential and pitfalls for video production

As with other AI video generators, Sora’s results are frequently mixed. You might see continuity errors like disappearing objects or misspelled text. The large language model also doesn’t have a built-in understanding of physics, so objects and effects might not behave properly. This isn’t something you’d use to replace conventional video production, especially for realistic output.

However, it might be useful for animations and abstract videos where accuracy isn’t important. You could make persuasive motion graphics or B-roll footage without requiring significant artistic knowledge.

There are some concerns that might persist even if OpenAI addresses all the technical issues. Models like Sora and Google’s Veo still tend to be trained on publicly available content, raising the possibility that they’ll violate copyright. Actors and producers have already objected to the potential that their work might be replaced by AI — Sora won’t completely alleviate their fears.

And while OpenAI might make it harder to produce misinformation, you can still crop watermarks (as YouTuber Marques Brownlee illustrated) and use carefully-written prompts to mislead audiences. There’s still the chance that malicious creators will use Sora and similar generators to spread false claims that could lead to real-world harm.

The post OpenAI’s Sora AI video generator is now available to everyone appeared first on ReadWrite.

]]>
Pexels