Posted 9/9/24 3:46pm
(Image credit: SOPA Images via Getty Images)

On Friday, Roblox announced plans to introduce an open source generative AI tool that will allow game creators to build 3D environments and objects using text prompts, reports MIT Tech Review. The feature, which is still under development, may streamline the process of creating game worlds on the popular online platform, potentially opening up more aspects of game creation to those without extensive 3D design skills. Roblox has not announced a specific launch date for the new AI tool, which is based on what it calls a "3D foundational model." The company shared a demo video in which a user types "create a race track," then "make the scenery a desert," and the AI model creates a corresponding model in the proper environment. The system will also reportedly let users make modifications, such as changing the time of day or swapping out entire landscapes, and Roblox says the multimodal AI model will ultimately accept video and 3D prompts, not just text.

[Category: AI, Biz & IT, Gaming, 3D foundational model, 3D synthesis, game synthesis, gaming, generative ai, machine learning, multiplayer games, online games, open source, roblox, world generator, world synthesis]

Posted 9/6/24 2:23pm
(Image credit: Getty Images)

Researchers have discovered more than 280 malicious apps for Android that use optical character recognition to steal cryptocurrency wallet credentials from infected devices. The apps masquerade as official ones from banks, government services, TV streaming services, and utilities. In fact, they scour infected phones for text messages, contacts, and all stored images and surreptitiously send them to remote servers controlled by the app developers. The apps are available from malicious sites and are distributed in phishing messages sent to targets. There’s no indication that any of the apps were available through Google Play.

A high level of sophistication

The most notable thing about the newly discovered malware campaign is that the threat actors behind it are employing optical character recognition software in an attempt to extract cryptocurrency wallet credentials shown in images stored on infected devices. Many wallets let users protect access with a mnemonic phrase: a series of random words that is easier for most people to remember than the jumble of characters in a private key. Words are also easier for humans to recognize in images.
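The attackers' OCR pipeline isn't public, but the general technique, scanning extracted text for runs of seed-phrase words, can be sketched. This is an illustrative sketch of the bug class, not the malware's actual code; the wordlist below is a tiny hypothetical subset of the real 2,048-word BIP-39 English list.

```python
import re

# Hypothetical subset of the 2,048-word BIP-39 English wordlist.
BIP39_SAMPLE = {
    "abandon", "ability", "able", "about", "zoo",
    "legal", "winner", "thank", "yellow", "wave",
}

def looks_like_seed_phrase(text, wordlist=BIP39_SAMPLE, min_run=12):
    """Return True if `text` contains a run of at least `min_run`
    consecutive wordlist words -- the shape of a 12- or 24-word
    wallet mnemonic captured in a screenshot."""
    words = re.findall(r"[a-z]+", text.lower())
    run = 0
    for w in words:
        run = run + 1 if w in wordlist else 0
        if run >= min_run:
            return True
    return False

# Ordinary prose that merely contains a couple of wordlist words
# doesn't trip the check.
print(looks_like_seed_phrase("meet me at the zoo about noon"))  # False
```

The same check, pointed at your own photo library's OCR output, is how a defender might audit for accidentally stored seed phrases.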

[Category: Biz & IT, Security, Uncategorized]

Posted 9/6/24 12:31pm
(Image credit: VGG | Getty Images)

The cost of renting cloud services using Nvidia’s leading artificial intelligence chips is lower in China than in the US, a sign that the advanced processors are easily reaching the Chinese market despite Washington’s export restrictions. Four small-scale Chinese cloud providers charge local tech groups roughly $6 an hour to use a server with eight Nvidia A100 processors in a base configuration, companies and customers told the Financial Times. Small cloud vendors in the US charge about $10 an hour for the same setup. The low prices, according to people in the AI and cloud industry, are an indication of plentiful supply of Nvidia chips in China and the circumvention of US measures designed to prevent access to cutting-edge technologies.

[Category: AI, Biz & IT, NVIDIA, nvidia ai, syndication]

Posted 9/6/24 9:32am
(Image caption: "Hmm, no signal here. I'm trying to figure it out, but nothing comes to mind …" Credit: Getty Images)

One issue in getting office buildings networked that you don't typically face at home is concrete—and lots of it. Concrete walls are an average of 8 inches thick inside most commercial real estate. Keeping a network running through them is not merely a matter of running cord. Not everybody has the knowledge or tools to punch through that kind of wall. Even if they do, you can't just put a hole in something that might be load-bearing or part of a fire control system without imaging, permits, and contractors. The frequency bands that can penetrate these walls, like those used by 3G, are being phased out, and the bands that provide enough throughput for modern systems, like 5G, can't make it through. That's what WaveCore, from Airvine Scientific, aims to fix, and I can't help but find it fascinating after originally seeing it on The Register. The company had previously taken on lesser solid obstructions, like plaster and thick glass, with its WaveTunnel. Two WaveCore units on either side of a wall (or on different floors) can push through a stated 12 inches of concrete. In in-house testing, Airvine reports pushing just under 4Gbps through 12 inches of garage concrete, and the signal can bend around corners, even 90 degrees. Your particular cement and aggregate combinations may vary, of course.
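To see why penetration drops so sharply with frequency, a rough link-budget sketch helps. The dB-per-inch figures below are illustrative assumptions, chosen only to show the trend (published concrete-loss measurements vary widely by mix, rebar, and moisture); they are not Airvine's numbers.

```python
# Rough sketch: signal loss through a concrete wall at different
# carrier frequencies. Loss per inch is an assumed, illustrative
# figure reflecting the general trend that loss grows with frequency.
LOSS_DB_PER_INCH = {
    0.9: 0.5,   # ~900 MHz (legacy 2G/3G bands): concrete is nearly transparent
    3.5: 2.0,   # ~3.5 GHz (mid-band 5G)
    6.0: 4.0,   # ~6 GHz (Wi-Fi 6E territory)
}

def wall_loss_db(freq_ghz, thickness_in):
    """Total attenuation (dB) for a wall of the given thickness."""
    return LOSS_DB_PER_INCH[freq_ghz] * thickness_in

for f in sorted(LOSS_DB_PER_INCH):
    print(f"{f} GHz through an 8-inch wall: {wall_loss_db(f, 8):.0f} dB")
```

Every 10 dB is a tenfold power drop, so under these assumed figures an 8-inch wall at 6 GHz costs 32 dB, more than a thousandfold, which is why punching high-throughput signal through concrete takes purpose-built hardware rather than an ordinary access point.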

[Category: Biz & IT, Tech, airvine, airvine scientific, concrete, Networking, networking hardware, wavecore, wifi 6]

Posted 9/5/24 2:54pm
(Image credit: Getty Images)

Federal prosecutors on Thursday unsealed an indictment charging six Russian nationals with conspiracy to hack into the computer networks of the Ukrainian government and its allies and steal or destroy sensitive data on behalf of the Kremlin. The indictment, filed in US District Court for the District of Maryland, said that five of the men were officers in Unit 29155 of the Russian Main Intelligence Directorate (GRU), a military intelligence agency of the General Staff of the Armed Forces. Along with a sixth defendant, prosecutors alleged, they engaged in a conspiracy to hack, exfiltrate data, leak information, and destroy computer systems associated with the Ukrainian government in advance of the Russian invasion of Ukraine in February 2022.

Targeting critical infrastructure with WhisperGate

The indictment, which supersedes one filed earlier, comes 32 months after Microsoft documented its discovery that a destructive piece of malware, dubbed WhisperGate, had infected dozens of Ukrainian government, nonprofit, and IT organizations. WhisperGate masqueraded as ransomware, but in actuality it was malware that permanently destroyed computers and the data stored on them by wiping the master boot record—the part of the hard drive needed to start the operating system during bootup.

[Category: Biz & IT, Security, GRU, indictments, Justice Department, kremlin]

Posted 9/5/24 1:28pm
AT&T filed a lawsuit against Broadcom on August 29 accusing it of seeking to “retroactively change existing VMware contracts to match its new corporate strategy.” The lawsuit, spotted by Channel Futures, concerns claims that Broadcom is not letting AT&T renew support services for previously purchased perpetual VMware software licenses unless AT&T meets certain conditions. Broadcom closed its $61 billion VMware acquisition in November and swiftly enacted sweeping changes. For example, in December, Broadcom announced the end of VMware perpetual license sales in favor of subscriptions of bundled products. Combined with higher core requirements per CPU subscription, complaints ensued that VMware was getting more expensive to work with. AT&T uses VMware software to run 75,000 virtual machines (VMs) across about 8,600 servers, per the complaint filed at the Supreme Court of the State of New York [PDF]. It reportedly uses the VMs to support customer service operations and for operations management efficiency.

[Category: Biz & IT, AT&T, Broadcom, vmware]

Posted 9/5/24 9:02am
(Image credit: anilyanik via Getty Images)

On Wednesday, federal prosecutors charged a North Carolina musician with defrauding streaming services of $10 million through an elaborate scheme involving AI, as reported by The New York Times. Michael Smith, 52, allegedly used AI to create hundreds of thousands of fake songs by nonexistent bands, then streamed them using bots to collect royalties from platforms like Spotify, Apple Music, and Amazon Music. While the AI-generated element of this story is novel, Smith allegedly broke the law by setting up an elaborate fake listener scheme. The US Attorney for the Southern District of New York, Damian Williams, announced the charges, which include wire fraud and money laundering conspiracy. If convicted, Smith could face up to 20 years in prison for each charge. Smith's scheme, which prosecutors say ran for seven years, involved creating thousands of fake streaming accounts using purchased email addresses. He developed software to play his AI-generated music on repeat from various computers, mimicking individual listeners from different locations. In an industry where success is measured by digital listens, Smith's fabricated catalog reportedly managed to rack up billions of streams.

[Category: AI, Biz & IT, AI music generator, audio synthesis, fraud, frosted prick, machine learning, music synthesis, royalties, scam, scheme, Streaming music]

Posted 9/4/24 3:38pm
(Image credit: Jorg Greuel via Getty Images)

Over the weekend, the nonprofit National Novel Writing Month organization (NaNoWriMo) published an FAQ outlining its position on AI, calling categorical rejection of AI writing technology "classist" and "ableist." The statement caused a backlash online, prompting four members of the organization's board to step down and leading a sponsor to withdraw its support. "We believe that to categorically condemn AI would be to ignore classist and ableist issues surrounding the use of the technology," wrote NaNoWriMo, "and that questions around the use of AI tie to questions around privilege." NaNoWriMo, known for its annual challenge where participants write a 50,000-word manuscript in November, argued in its post that condemning AI would ignore issues of class and ability, suggesting the technology could benefit those who might otherwise need to hire human writing assistants or have differing cognitive abilities.

[Category: AI, Biz & IT, AI accessibility, AI and disabilities, annual writing event, ChatGPT, chatgtp, Chuck Wendig, Claire Silver, FAQ, machine learning, NaNoWriMo, no novel november]

Posted 9/4/24 12:57pm
(Image credit: Getty Images)

Networking hardware-maker Zyxel is warning of nearly a dozen vulnerabilities in a wide array of its products. If left unpatched, some of them could enable the complete takeover of the devices, which can be targeted as an initial point of entry into large networks. The most serious vulnerability, tracked as CVE-2024-7261, can be exploited to “allow an unauthenticated attacker to execute OS commands by sending a crafted cookie to a vulnerable device,” Zyxel warned. The flaw, with a severity rating of 9.8 out of 10, stems from the “improper neutralization of special elements in the parameter ‘host’ in the CGI program” of vulnerable access points and security routers. Nearly 30 Zyxel devices are affected. As is the case with the remaining vulnerabilities in this post, Zyxel is urging customers to patch them as soon as possible.

But wait... there’s more

The hardware manufacturer warned of seven additional vulnerabilities affecting firewall series including the ATP, USG-FLEX, and USG FLEX 50(W)/USG20(W)-VPN. The vulnerabilities carry severity ratings ranging from 4.9 to 8.1.
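Zyxel hasn't published the vulnerable code, but "improper neutralization of special elements" in a parameter that reaches an OS command is the classic command-injection pattern (CWE-78). The sketch below is a generic illustration of the bug class and its fixes, not Zyxel's actual CGI code; the `ping` wrapper is a hypothetical example.

```python
import shlex
import subprocess

def ping_host_unsafe(host):
    # VULNERABLE pattern: the parameter is spliced into a shell string,
    # so host = "example.com; cat /etc/passwd" smuggles in a second
    # command once this string reaches a shell.
    return f"ping -c 1 {host}"

def ping_host_safe(host):
    # Fix 1: quote the parameter so shell metacharacters become inert
    # text inside a single-quoted token.
    return f"ping -c 1 {shlex.quote(host)}"

def ping_host_safest(host):
    # Fix 2 (preferred): skip the shell entirely. argv elements are
    # never re-parsed, so metacharacters in `host` have no effect.
    return subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, check=False)

evil = "example.com; cat /etc/passwd"
print(ping_host_unsafe(evil))  # shell would see two commands
print(ping_host_safe(evil))    # shell sees one command, odd hostname
```

The same reasoning applies to a CGI handler reading a `host` cookie: neutralize the value (or avoid the shell) before it touches a command line.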

[Category: Biz & IT, Security, network hardware, patches, vulnerabilities, zyxel]

Posted 9/4/24 10:42am
(Image caption: Ilya Sutskever, OpenAI Chief Scientist, speaks at Tel Aviv University on June 5, 2023. Credit: JACK GUEZ via Getty Images)

On Wednesday, Reuters reported that Safe Superintelligence (SSI), a new AI startup cofounded by OpenAI's former chief scientist Ilya Sutskever, has raised $1 billion in funding. The 3-month-old company plans to focus on developing what it calls "safe" AI systems that surpass human capabilities. The fundraising effort shows that even amid growing skepticism around massive investments in AI tech that so far have failed to be profitable, some backers are still willing to place large bets on high-profile talent in foundational AI research. Venture capital firms like Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel participated in the SSI funding round. SSI aims to use the new funds for computing power and attracting talent. With only 10 employees at the moment, the company intends to build a larger team of researchers across locations in Palo Alto, California, and Tel Aviv, Reuters reported.

[Category: AI, Biz & IT, agi, AI models, artificial superintelligence, ASI, Ilya Sutskever, large language models, machine learning, openai, Palo Alto, Reuters, Safe Superintelligence, SSI, Tel Aviv]

Posted 9/3/24 1:59pm
(Image caption: An ABC handout promotional image for "AI and the Future of Us: An Oprah Winfrey Special." Credit: ABC)

On Thursday, ABC announced an upcoming TV special titled, "AI and the Future of Us: An Oprah Winfrey Special." The one-hour show, set to air on September 12, aims to explore AI's impact on daily life and will feature interviews with figures in the tech industry, like OpenAI CEO Sam Altman and Bill Gates. Soon after the announcement, some AI critics began questioning the guest list and the framing of the show in general. "Sure is nice of Oprah to host this extended sales pitch for the generative AI industry at a moment when its fortunes are flagging and the AI bubble is threatening to burst," tweeted author Brian Merchant, who frequently criticizes generative AI technology in op-eds, social media, and through his "Blood in the Machine" AI newsletter. "The way the experts who are not experts are presented as such what a train wreck," replied artist Karla Ortiz, who is a plaintiff in a lawsuit against several AI companies. "There’s still PLENTY of time to get actual experts and have a better discussion on this because yikes."

[Category: AI, Biz & IT, ABC, AI criticism, AI lawsuit, Bill Gates, Brian Merchant, ChatGPT, chatgtp, Dr. Margaret Mitchell, Ed Zitron, image synthesis, Karla Ortiz, large language models, machine learning, openai, Oprah Winfrey, sam altman]

Posted 9/3/24 1:38pm
(Image caption: Rust never sleeps. But Rust, the programming language, can be held at bay if enough kernel programmers aren't interested in seeing it implemented. Credit: Getty Images)

The Linux kernel is not a place to work if you're not ready for some, shall we say, spirited argument. Still, one key developer in the project to expand Rust's place inside the largely C-based kernel feels the "nontechnical nonsense" is too much, so he's retiring. Wedson Almeida Filho, a leader in the Rust for Linux project, wrote to the Linux kernel mailing list last week to remove himself as the project's maintainer. "After almost 4 years, I find myself lacking the energy and enthusiasm I once had to respond to some of the nontechnical nonsense, so it's best to leave it up to those who still have it in them," Filho wrote. While thanking his teammates, he noted that he believed the future of kernels "is with memory-safe languages," such as Rust. "I am no visionary but if Linux doesn't internalize this, I'm afraid some other kernel will do to it what it did to Unix," Filho wrote.

Filho also left a "sample for context," a link to a moment during a Linux conference talk in which an off-camera voice, identified by Filho in a Register interview as kernel maintainer Ted Ts'o, emphatically interjects: "Here's the thing: you're not going to force all of us to learn Rust." In the context of Filho's request that Linux's file system implement Rust bindings, Ts'o says that while he knows he must fix all the C code for any change he makes, he cannot or will not fix the Rust bindings that may be affected.

[Category: Biz & IT, Tech, asahi lina, asahi linux, C language, kernel development, Linus Torvalds, Linux, Linux kernel, rust, rust language, Wedson Almeida Filho]

Posted 9/3/24 11:58am
(Image credit: Yubico)

The YubiKey 5, the most widely used hardware token for two-factor authentication based on the FIDO standard, contains a cryptographic flaw that makes the finger-size device vulnerable to cloning when an attacker gains temporary physical access to it, researchers said Tuesday. The cryptographic flaw, known as a side channel, resides in a small microcontroller used in a large number of other authentication devices, including smartcards used in banking, in electronic passports, and for access to secure areas. While the researchers have confirmed that all YubiKey 5 series models can be cloned, they haven’t tested other devices that use the same microcontroller, Infineon's SLE78, or its successors, the Infineon Optiga Trust M and the Infineon Optiga TPM. The researchers suspect that any device using any of these three microcontrollers and the Infineon cryptographic library contains the same vulnerability.

Patching not possible

YubiKey-maker Yubico issued an advisory in coordination with a detailed disclosure report from NinjaLab, the security firm that reverse-engineered the YubiKey 5 series and devised the cloning attack. All YubiKeys running firmware prior to version 5.7—which was released in May and replaces the Infineon cryptolibrary with a custom one—are vulnerable. Updating key firmware on the YubiKey isn’t possible. That leaves all affected YubiKeys permanently vulnerable.
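NinjaLab's attack measures electromagnetic emissions during ECDSA signing; the details are in their report. But the underlying principle of any side channel, that secret-dependent execution time (or power) leaks information, can be illustrated with a deliberately leaky comparison. The leaky function below is a hypothetical example of the flaw class, not Infineon's code; `hmac.compare_digest` is Python's standard constant-time remedy.

```python
import hmac

def leaky_equals(secret: bytes, guess: bytes) -> bool:
    """Early-exit comparison: runtime depends on how many leading
    bytes of `guess` are correct, so an attacker who can time many
    attempts recovers the secret one byte at a time."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:       # bails out at the first mismatch
            return False
    return True

def constant_time_equals(secret: bytes, guess: bytes) -> bool:
    """Same answer, but the comparison processes every byte no matter
    where a mismatch occurs, so timing reveals nothing."""
    return hmac.compare_digest(secret, guess)

token = b"s3cret-token"
assert leaky_equals(token, b"s3cret-token")
assert constant_time_equals(token, b"s3cret-token")
assert not leaky_equals(token, b"s3cret-xxxxx")
```

The YubiKey flaw is the same idea at a lower level: a modular-inversion step in the Infineon library whose physical behavior depends on secret data, observable through electromagnetic emissions rather than a stopwatch.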

[Category: Biz & IT, Security, 2fa, cryptography, ecdsa, elliptic curve digital signature algorithm, encryption, side channels, two-factor authentication]

Posted 8/30/24 2:00pm
(Image credit: Getty Images)

A judge in Ohio has issued a temporary restraining order against a security researcher who presented evidence that a recent ransomware attack on the city of Columbus scooped up reams of sensitive personal information, contradicting claims made by city officials. The order, issued by a judge in Ohio's Franklin County, came after the city of Columbus fell victim to a ransomware attack on July 18 that siphoned 6.5 terabytes of the city’s data. A ransomware group known as Rhysida took credit for the attack and offered to auction off the data with a starting bid of about $1.7 million in bitcoin. On August 8, after the auction failed to find a bidder, Rhysida released what it said was about 45 percent of the stolen data on the group’s dark web site, which is accessible to anyone with a Tor browser.

Dark web not readily available to public—really?

Columbus Mayor Andrew Ginther said on August 13 that a “breakthrough” in the city’s forensic investigation of the breach found that the sensitive files Rhysida obtained were either encrypted or corrupted, making them “unusable” to the thieves. Ginther went on to say the data’s lack of integrity was likely the reason the ransomware group had been unable to auction off the data.

[Category: Biz & IT, Policy, Security, lawsuits, personal information, ransomware]

Posted 8/30/24 11:21am
(Image credit: Benj Edwards / Getty Images)

On Thursday, OpenAI said that ChatGPT has attracted over 200 million weekly active users, according to a report from Axios, doubling the AI assistant's user base since November 2023. The company also revealed that 92 percent of Fortune 500 companies are now using its products, highlighting the growing adoption of generative AI tools in the corporate world. The rapid growth in user numbers for ChatGPT (which is not a new phenomenon for OpenAI) suggests growing interest in—and perhaps reliance on—the AI-powered tool, despite frequent skepticism from some critics of the tech industry. "Generative AI is a product with no mass-market utility—at least on the scale of truly revolutionary movements like the original cloud computing and smartphone booms," PR consultant and vocal OpenAI critic Ed Zitron blogged in July. "And it’s one that costs an eye-watering amount to build and run."

[Category: AI, Biz & IT, AI critics, AI prohibition, AI stigma, axios, ChatGPT, chatgtp, Ed Zitron, Ethan Mollick, machine learning, openai]

Posted 8/29/24 3:05pm
(Image credit: Getty Images)

Critics of spyware and exploit sellers have long warned that the advanced hacking tools sold by commercial surveillance vendors (CSVs) represent a worldwide danger because they inevitably find their way into the hands of malicious parties, even when the CSVs promise they will be used only to target known criminals. On Thursday, Google analysts presented evidence bolstering the critique after finding that spies working on behalf of the Kremlin used exploits that are “identical or strikingly similar” to those sold by spyware makers Intellexa and NSO Group. The hacking outfit, tracked under names including APT29, Cozy Bear, and Midnight Blizzard, is widely assessed to work on behalf of Russia’s Foreign Intelligence Service, or the SVR. Researchers with Google’s Threat Analysis Group, which tracks nation-state hacking, said Thursday that they observed APT29 using exploits identical or strikingly similar to those first used by commercial exploit sellers NSO Group of Israel and Intellexa of Ireland. In both cases, the commercial surveillance vendors’ exploits were first used as zero-days, meaning the vulnerabilities weren’t publicly known and no patch was available.

Identical or strikingly similar

Once patches became available for the vulnerabilities, TAG said, APT29 used the exploits in watering hole attacks, which infect targets by surreptitiously planting exploits on sites they’re known to frequent. TAG said APT29 used the exploits as n-days, which target vulnerabilities that have recently been fixed but not yet widely installed by users.

[Category: Biz & IT, Security]

Posted 8/28/24 3:25pm
(Image credit: Getty Images)

Malicious hackers are exploiting a critical vulnerability in a widely used security camera to spread Mirai, a family of malware that wrangles infected Internet of Things devices into large networks for use in attacks that take down websites and other Internet-connected devices. The attacks target the AVM1203, a surveillance device from Taiwan-based manufacturer AVTECH, network security provider Akamai said Wednesday. Unknown attackers have been exploiting a 5-year-old vulnerability since March. The zero-day vulnerability, tracked as CVE-2024-7029, is easy to exploit and allows attackers to execute malicious code. The AVM1203 is no longer sold or supported, so no update is available to fix the critical zero-day.

That time a ragtag army shook the Internet

Akamai said that the attackers are exploiting the vulnerability so they can install a variant of Mirai, which arrived in September 2016 when a botnet of infected devices took down cybersecurity news site Krebs on Security. Mirai contained functionality that allowed a ragtag army of compromised webcams, routers, and other types of IoT devices to wage distributed denial-of-service attacks of record-setting sizes. In the weeks that followed, the Mirai botnet delivered similar attacks on Internet service providers and other targets. One such attack, against dynamic domain name provider Dyn, paralyzed vast swaths of the Internet.

[Category: Biz & IT, Security, exploits, Internet of things, mirai, vulnerabilities, webcams]

Posted 8/28/24 11:15am
(Image caption: Does Mono fit between the Chilean cab sav and Argentinian malbec, or is it more of an orange, maybe? Credit: Getty Images)

Microsoft has donated the Mono Project, an open-source framework that brought its .NET platform to non-Windows systems, to the Wine community. WineHQ will be the steward of the Mono Project upstream code, while Microsoft will encourage Mono-based apps to migrate to its open source .NET framework. As Microsoft notes on the Mono Project homepage, the last major release of Mono was in July 2019. Mono was "a trailblazer for the .NET platform across many operating systems" and was the first implementation of .NET on Android, iOS, Linux, and other operating systems.

Ximian, Novell, SUSE, Xamarin, Microsoft—now Wine

Mono began as a project of Miguel de Icaza, co-creator of the GNOME desktop. De Icaza led Ximian (originally Helix Code), aiming to bring Microsoft's then-new .NET platform to Unix-like platforms. Ximian was acquired by Novell in 2003.

[Category: Biz & IT, Tech, .NET, microsoft, Mono, Wine]

Posted 8/28/24 11:06am
(Image credit: Aurich Lawson | Getty Images)

On Tuesday, researchers from Google and Tel Aviv University unveiled GameNGen, a new AI model that can interactively simulate the classic 1993 first-person shooter game Doom in real time using AI image generation techniques borrowed from Stable Diffusion. It's a neural network system that can function as a limited game engine, potentially opening new possibilities for real-time video game synthesis in the future. For example, instead of drawing graphical video frames using traditional techniques, future games could potentially use an AI engine to "imagine" or hallucinate graphics in real time as a prediction task. "The potential here is absurd," wrote app developer Nick Dobos in reaction to the news. "Why write complex rules for software by hand when the AI can just think every pixel for you?"

[Category: AI, Biz & IT, Gaming, Doom, game synthesis, gaming, google deepmind, Google research, id Software, image synthesis, John Carmack, john romero, machine learning, neural rendering, Stable Diffusion]

Posted 8/27/24 3:29pm
(Image credit: Getty)

Indian IT firm Infosys has been accused of being “exploitative” after allegedly sending job offers to thousands of engineering graduates but still not onboarding any of them after as long as two years. The recent graduates have reportedly been told they must do repeated unpaid training in order to remain eligible to work at Infosys. Last week, the Nascent Information Technology Employees Senate (NITES), an Indian advocacy group for IT workers, sent a letter [PDF], shared by The Register, to Mansukh Mandaviya, India’s minister of Labor and Employment. It requested that the Indian government intervene “to prevent exploitation of young IT graduates by Infosys." The letter, signed by NITES president Harpreet Singh Saluja, claimed that NITES received “multiple” complaints from recent engineering graduates “who have been subjected to unprofessional and exploitative practices” from Infosys after being hired for system engineer and digital specialist engineer roles. According to NITES, Infosys sent these people offer letters as early as April 22, 2022, after engaging in a college recruitment effort from 2022–2023, but never onboarded the graduates. NITES has previously said that “over 2,000 recruits” are affected.

[Category: Biz & IT, india, infosys, jobs]

Posted 8/27/24 2:07pm
(Image caption: A man peers over a glass partition, seeking transparency. Credit: Image Source via Getty Images)

The Open Source Initiative (OSI) recently unveiled its latest draft definition for "open source AI," aiming to clarify the ambiguous use of the term in the fast-moving field. The move comes as some companies like Meta release trained AI language model weights and code with usage restrictions while using the "open source" label. This has sparked intense debates among free-software advocates about what truly constitutes "open source" in the context of AI. For instance, Meta's Llama 3 model, while freely available, doesn't meet the traditional open source criteria as defined by the OSI for software because it imposes license restrictions on usage based on company size or the type of content produced with the model. The AI image generator Flux is another "open" model that is not truly open source. Because of this type of ambiguity, we've typically described AI models that include code or weights with restrictions, or that lack accompanying training data, with alternative terms like "open-weights" or "source-available." To address the issue formally, the OSI—which is well-known for its advocacy for open software standards—has assembled a group of about 70 participants, including researchers, lawyers, policymakers, and activists. Representatives from major tech companies like Meta, Google, and Amazon also joined the effort. The group's current draft (version 0.0.9) definition of open source AI emphasizes "four fundamental freedoms" reminiscent of those defining free software: the freedom to use the AI system for any purpose without asking for permission, to study how it works, to modify it for any purpose, and to share it with or without modifications.

[Category: AI, Biz & IT, AI regulation, AI workshops, All Things Open, ChatGPT, chatgtp, flux, free software, GPT-4o, machine learning, meta, open source, open source AI, Open Source Initiative, open weights, Open weights AI, openai, OSI, Raleigh, SB-1047, source available]

As of 9/9/24 11:12pm. Last new 9/9/24 4:49pm.
