
Apple is dabbling in AI image-editing with an open-source multimodal AI model.

Earlier this week, researchers from Apple and the University of California, Santa Barbara released MLLM-Guided Image Editing, or "MGIE," a multimodal AI model that can make Photoshop-style edits to images based on simple text commands.

On the AI development front, Apple has been characteristically cautious, and it was one of the few companies that didn't announce big AI plans in the wake of last year's ChatGPT hype. However, Apple reportedly has an in-house ChatGPT-esque chatbot dubbed "Apple GPT," and Tim Cook has said Apple will be making some major AI announcements later this year.


Whether those announcements include an AI image editing tool remains to be seen, but MGIE shows that Apple is actively doing research and development in the area.


While there are already AI image editing tools out there, "human instructions are sometimes too brief for current methods to capture and follow," the research paper says, which often leads to lackluster or failed results. MGIE takes a different approach: it uses MLLMs, or multimodal large language models, to interpret a brief text prompt together with the image and expand it into an "expressive instruction" that guides the edit. Effectively, learning from MLLMs lets MGIE follow natural language commands without the need for heavy description.
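
Conceptually, that is a two-stage flow: an MLLM expands a terse prompt into an "expressive instruction," and an editing model then applies that instruction to the pixels. The Python sketch below illustrates the idea only; the class names, interfaces, and method signatures are hypothetical stand-ins and are not taken from the MGIE codebase.

```python
# Minimal conceptual sketch of MLLM-guided image editing.
# All names here (MultimodalLLM, DiffusionEditor, MGIELikePipeline, and their
# methods) are illustrative assumptions, not the actual MGIE API.

from dataclasses import dataclass
from typing import Protocol


class MultimodalLLM(Protocol):
    """Any model that can look at an image and expand a terse instruction."""
    def expand_instruction(self, image_path: str, brief_prompt: str) -> str: ...


class DiffusionEditor(Protocol):
    """Any image-to-image editor conditioned on a text instruction."""
    def edit(self, image_path: str, instruction: str, out_path: str) -> None: ...


@dataclass
class MGIELikePipeline:
    """Two stages: the MLLM turns a brief prompt into an 'expressive
    instruction', then the editor applies that instruction to the image."""
    mllm: MultimodalLLM
    editor: DiffusionEditor

    def run(self, image_path: str, brief_prompt: str, out_path: str) -> str:
        # Stage 1: derive a detailed, visually grounded instruction,
        # e.g. "make this more healthy" -> "add green vegetables to the pizza".
        expressive = self.mllm.expand_instruction(image_path, brief_prompt)
        # Stage 2: perform the actual edit guided by that instruction.
        self.editor.edit(image_path, expressive, out_path)
        return expressive
```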

In examples from the research, MGIE can take an input image of a pepperoni pizza and, given the prompt "make this more healthy," infer that "this" refers to the pepperoni pizza and that "more healthy" can be interpreted as adding vegetables. Thus, the output image is a pepperoni pizza with some green vegetables scattered on top.


In another example comparing MGIE to other models, the input image shows a forested shoreline on a tranquil body of water. Given the prompt "add lightning and make the water reflect the lightning," other models omit the lightning reflection, but MGIE successfully captures it.

MGIE is available as an open-source model on GitHub and as a demo version hosted on Hugging Face.
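
For anyone who wants to experiment, a hosted Gradio demo can in principle be driven programmatically with the gradio_client package. This is only a hedged sketch: the Space identifier and the predict() argument order below are placeholders, so check the actual Hugging Face Space's API documentation before using it.

```python
# Hedged sketch: calling a Gradio-hosted MGIE demo from Python.
# The Space ID and predict() arguments are placeholders/assumptions --
# consult the real Space's "Use via API" panel for the actual signature.
from gradio_client import Client

client = Client("someone/mgie-demo")  # hypothetical Space ID
result = client.predict(
    "pizza.jpg",               # path to the input image (assumed argument order)
    "make this more healthy",  # brief instruction, as in the example above
)
print(result)  # whatever the Space returns, e.g. a path to the edited image
```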

Topics: Apple, Artificial Intelligence

