Posted 2/3/23 2:10pm
Microsoft said on Friday that an Iranian nation-state group already sanctioned by the US government was behind an attack last month that targeted the satirical French magazine Charlie Hebdo and thousands of its readers.

The attack came to light on January 4, when a previously unknown group calling itself Holy Souls took to the Internet to claim it had obtained a Charlie Hebdo database that contained personal information for 230,000 of its customers. The post said the database was available for sale at the price of 20 BTC, or roughly $340,000 at the time. The group also released a sample of the data that included the full names, telephone numbers, and home and email addresses of people who had subscribed to, or purchased merchandise from, the publication. French media confirmed the veracity of the leaked data.

The release of the sample put the customers at risk of online targeting or physical violence by extremist groups, which have retaliated against Charlie Hebdo in recent years for its satirical treatment of matters pertaining to the Muslim religion and Islamic countries such as Iran. The retaliation included the 2015 shooting at Charlie Hebdo's offices by two French Muslim terrorist brothers that killed 12 people and injured 11 others. To further gin up attention to the breached data, a flurry of fake personas—one falsely claiming to be a Charlie Hebdo editor—took to social media to discuss and publicize the leak.

[Category: Biz & IT, charlie hebdo, Emennet Pasargad, Iran]

Posted 2/3/23 9:29am
If your main problem with the Microsoft Store is that you get too many relevant results when you search for apps, good news: Microsoft is officially launching Microsoft Store Ads, a way for developers to pay to get their apps in front of your eyes when you go to the store to look for something else. Microsoft's landing page for the feature says the apps will appear during searches and in the Apps and Gaming tabs within the app. Developers will be able to track whether and where users see the ads and whether they're downloading and opening the apps once they see the ads.

Microsoft also provided an update on the health of the Microsoft Store, pointing to 2022 as "a record year," with more than 900 million unique users worldwide and "a 122% year-over-year increase in developer submissions of new apps and games." Microsoft has steadily loosened its restrictions on Store apps in the last year or two, allowing in traditional Win32 apps and also leaning on Amazon's Android app store and the Windows Subsystem for Android to expand its selection.

[Category: Biz & IT, Tech, Microsoft Store, windows 11]

Posted 2/3/23 6:29am
Searching Google for downloads of popular software has always come with risks, but over the past few months, it has been downright dangerous, according to researchers and a pseudorandom collection of queries.

“Threat researchers are used to seeing a moderate flow of malvertising via Google Ads,” volunteers at Spamhaus wrote on Thursday. “However, over the past few days, researchers have witnessed a massive spike affecting numerous famous brands, with multiple malware being utilized. This is not ‘the norm.’”

One of many new threats: MalVirt

The surge is coming from numerous malware families, including AuroraStealer, IcedID, Meta Stealer, RedLine Stealer, Vidar, Formbook, and XLoader. In the past, these families typically relied on phishing and malicious spam that attached Microsoft Word documents with booby-trapped macros. Over the past month, Google Ads has become the go-to place for criminals to spread their malicious wares that are disguised as legitimate downloads by impersonating brands such as Adobe Reader, Gimp, Microsoft Teams, OBS, Slack, Tor, and Thunderbird.

[Category: Biz & IT, malicious ads, malvertising, malware]

Posted 2/1/23 3:57pm
[Image: A realistic artist's depiction of an encounter with ChatGPT Plus. (credit: Benj Edwards / Ars Technica / OpenAI)]

On Wednesday, Reuters reported that AI bot ChatGPT reached an estimated 100 million active monthly users last month, a mere two months from launch, making it the "fastest-growing consumer application in history," according to a UBS investment bank research note. In comparison, TikTok took nine months to reach 100 million monthly users, and Instagram about 2.5 years, according to UBS researcher Lloyd Walmsley.

“In 20 years following the Internet space, we cannot recall a faster ramp in a consumer internet app,” Reuters quotes Walmsley as writing in the UBS note. Reuters says the UBS data comes from analytics firm Similar Web, which states that around 13 million unique visitors used ChatGPT every day in January, doubling the number of users in December.

[Category: Biz & IT, AI, ChatGPT, ChatGPT Plus, GPT-3, large language models, machine learning, openai]

Posted 2/1/23 2:08pm
As many as 29,000 network storage devices manufactured by Taiwan-based QNAP are vulnerable to hacks that are easy to carry out and give unauthenticated users on the Internet complete control, a security firm has warned.

The vulnerability, which carries a severity rating of 9.8 out of a possible 10, came to light on Monday, when QNAP issued a patch and urged users to install it. Tracked as CVE-2022-27596, the vulnerability makes it possible for remote hackers to perform a SQL injection, a type of attack that targets web applications that use the Structured Query Language. SQL injection vulnerabilities are exploited by entering specially crafted characters or scripts into the search fields, login fields, or URLs of a buggy website. The injections allow for the modifying, stealing, or deleting of data or the gaining of administrative control over the systems running the vulnerable apps.

QNAP’s advisory on Monday said that network-attached storage devices running QTS versions before 5.0.1.2234 and QuTS Hero versions prior to h5.0.1.2248 were vulnerable. The post also provided instructions for updating to the patched versions.
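For readers unfamiliar with the bug class, the sketch below shows the core of a SQL injection flaw and its standard fix in a few lines of Python using the standard library's sqlite3 module. It is purely illustrative; QNAP's affected code has not been published, and the table and queries here are invented.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    def find_user_vulnerable(name):
        # Attacker-controlled input is pasted directly into the SQL string, so a
        # crafted value such as "' OR '1'='1" rewrites the query's logic.
        query = f"SELECT * FROM users WHERE name = '{name}'"
        return conn.execute(query).fetchall()

    def find_user_safe(name):
        # Parameterized query: the driver treats the input strictly as data.
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    print(find_user_vulnerable("' OR '1'='1"))  # returns every row in the table
    print(find_user_safe("' OR '1'='1"))        # returns an empty list

The same principle applies whether the injected input arrives through a search field, a login form, or a URL parameter, as described above.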

[Category: Biz & IT, NAS, network attached storage, QNAP, ransomware, vulnerabilities]

Posted 2/1/23 2:00pm
[Image: A still image from the short film Dog and Boy, which uses image synthesis to help generate background artwork. (credit: Netflix)]

Over the past year, generative AI has kicked off a wave of existential dread over potential machine-fueled job loss not seen since the advent of the industrial revolution. On Tuesday, Netflix reinvigorated that fear when it debuted a short film called Dog and Boy that utilizes AI image synthesis to help generate its background artwork.

Directed by Ryotaro Makihara, the three-minute animated short follows the story of a boy and his robotic dog through cheerful times, although the story soon takes a dramatic turn toward the post-apocalyptic. Along the way, it includes lush backgrounds apparently created as a collaboration between man and machine, credited to "AI (+Human)" in the end credit sequence.

In the announcement tweet, Netflix cited an industry labor shortage as the reason for using the image synthesis technology.

[Category: Biz & IT, Gaming & Culture]

Posted 2/1/23 11:48am
SSDs have usurped hard disk drives (HDDs) when it comes to performance, but whether building a network-attached storage (NAS) setup or having high-capacity needs on a budget, plenty of people still rely on spinning platters. Older drives that have seen a lot of use, however, may not be as reliable as before. Data Backblaze shared this week highlights how a hard drive's annualized failure rate (AFR) can increase with age.

Since 2013, Backblaze, a backup and cloud storage company, has published an annual report analyzing the AFRs of hard drives in its data center. The 2022 report shared on Tuesday examines 230,921 hard drives across 29 models from HGST, Seagate, Toshiba, and WDC, with capacities ranging from 4–16TB. All models included at least 60 drives that were not previously used for testing.

Keep in mind that the sample group only consists of drives that Backblaze had on hand, and they are of varying ages, with some used for more days than others. However, Backblaze's report does give us a unique look into the results of long-term hard drive use.
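As a rough guide to how such a figure is computed, Backblaze's reports normalize failures by total drive days rather than by drive count; the short sketch below reproduces that arithmetic with made-up numbers.

    def annualized_failure_rate(failures, drive_days):
        # AFR (%) is roughly failures / drive days * 365 * 100, i.e., failures
        # scaled to a full year of operation per drive.
        return failures / drive_days * 365 * 100

    # Hypothetical model: 1,000 drives running a full year with 14 failures.
    print(f"{annualized_failure_rate(14, 1000 * 365):.2f}% AFR")  # prints 1.40% AFR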

[Category: Biz & IT, Tech, Backblaze, hard drives, Storage]

Posted 2/1/23 11:37am
[Image: An image from Stable Diffusion’s training set compared (left) to a similar Stable Diffusion generation (right) when prompted with "Ann Graham Lotz." (credit: Carlini et al., 2023)]

On Monday, a group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich released a paper outlining an adversarial attack that can extract a small percentage of training images from latent diffusion AI image synthesis models like Stable Diffusion. It challenges views that image synthesis models do not memorize their training data and that training data might remain private if not disclosed.

Recently, AI image synthesis models have been the subject of intense ethical debate and even legal action. Proponents and opponents of generative AI tools regularly argue over the privacy and copyright implications of these new technologies. Adding fuel to either side of the argument could dramatically affect potential legal regulation of the technology, and as a result, this latest paper, authored by Nicholas Carlini et al., has perked up ears in AI circles.

However, Carlini's results are not as clear-cut as they may first appear. Discovering instances of memorization in Stable Diffusion required 175 million image generations for testing and preexisting knowledge of trained images. Researchers only extracted 94 direct matches and 109 perceptual near-matches out of 350,000 high-probability-of-memorization images they tested (a set of known duplicates in the 160 million-image dataset used to train Stable Diffusion), resulting in a roughly 0.03 percent memorization rate in this particular scenario.
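As a quick check on the quoted rate (assuming, as the figure implies, that it counts only the direct matches):

    # Back-of-envelope check of the memorization rate quoted above.
    direct_matches = 94
    near_matches = 109
    tested_images = 350_000

    print(f"{direct_matches / tested_images:.4%}")                   # ~0.0269%, i.e., roughly 0.03 percent
    print(f"{(direct_matches + near_matches) / tested_images:.4%}")  # ~0.0580% if near-matches are included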

[Category: Biz & IT, adversarial AI, AI, AI ethics, Google Imagen, image synthesis, machine learning, privacy, Stable Diffusion]

Posted 2/1/23 4:00am
In the past year, a new term has arisen to describe an online scam raking in millions, if not billions, of dollars per year. It’s called "pig butchering," and now even Apple is getting fooled into participating.

Researchers from security firm Sophos said on Wednesday that they uncovered two apps available in the App Store that were part of an elaborate network of tools used to dupe people into putting large sums of money into fake investment scams. At least one of those apps also made it into Google Play, but that market is notorious for the number of malicious apps that bypass Google vetting. Sophos said this was the first time it had seen such apps in the App Store and that a previous app identified in these types of scams was a legitimate one that was later exploited by bad actors.

Pig butchering relies on a rich combination of apps, websites, web hosts, and humans—in some cases human trafficking victims—to build trust with a mark over a period of weeks or months, often under the guise of a romantic interest, financial advisor, or successful investor. Eventually, the online discussion will turn to investments, usually involving cryptocurrency, that the scammer claims to have earned huge sums of money from. The scammer then invites the victim to participate.

[Category: Biz & IT, App Store, apps, google play, scams]

Posted 1/30/23 3:59pm
GitHub said unknown intruders gained unauthorized access to some of its code repositories and stole code-signing certificates for two of its desktop applications: Desktop and Atom. Code-signing certificates place a cryptographic stamp on code to verify it was developed by the listed organization, which in this case is GitHub. If decrypted, the certificates could allow an attacker to sign unofficial versions of the apps that had been maliciously tampered with and pass them off as legitimate updates from GitHub. Current versions of Desktop and Atom are unaffected by the credential theft.

“A set of encrypted code signing certificates were exfiltrated; however, the certificates were password-protected and we have no evidence of malicious use,” the company wrote in an advisory. “As a preventative measure, we will revoke the exposed certificates used for the GitHub Desktop and Atom applications.”
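To make the stakes concrete, the sketch below shows the bare mechanics of code signing using the third-party Python "cryptography" package: whoever holds the decrypted private key can produce signatures that verify as the legitimate publisher's. Real desktop-app signing involves X.509 certificates and platform tooling; this is only the underlying idea, not GitHub's pipeline.

    # pip install cryptography
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    verify_key = signing_key.public_key()

    official_build = b"official application build"
    signature = signing_key.sign(official_build, padding.PKCS1v15(), hashes.SHA256())
    verify_key.verify(signature, official_build, padding.PKCS1v15(), hashes.SHA256())
    print("official build verifies")

    # An attacker holding the decrypted signing key can sign a tampered build
    # that passes the exact same check -- hence the certificate revocation.
    tampered_build = b"maliciously modified build"
    forged = signing_key.sign(tampered_build, padding.PKCS1v15(), hashes.SHA256())
    verify_key.verify(forged, tampered_build, padding.PKCS1v15(), hashes.SHA256())
    print("tampered build also verifies")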

[Category: Biz & IT, GitHub]

Posted 1/30/23 3:43pm
[Image: An AI-generated image of an exploding ball of music. (credit: Ars Technica)]

On Thursday, researchers from Google announced a new generative AI model called MusicLM that can create 24 KHz musical audio from text descriptions, such as "a calming violin melody backed by a distorted guitar riff." It can also transform a hummed melody into a different musical style and output music for several minutes.

MusicLM uses an AI model trained on what Google calls "a large dataset of unlabeled music," along with captions from MusicCaps, a new dataset composed of 5,521 music-text pairs. MusicCaps gets its text descriptions from human experts and its matching audio clips from Google's AudioSet, a collection of over 2 million labeled 10-second sound clips pulled from YouTube videos.

Generally speaking, MusicLM works in two main parts: first, it takes a sequence of audio tokens (pieces of sound) and maps them to semantic tokens (words that represent meaning) in captions for training. The second part receives user captions and/or input audio and generates acoustic tokens (pieces of sound that make up the resulting song output). The system relies on an earlier AI model called AudioLM (introduced by Google in September) along with other components such as SoundStream and MuLan.
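MusicLM itself has not been released, so the sketch below only mirrors the two-stage structure described above, with the actual models replaced by trivial stand-ins; every function here is a hypothetical placeholder, not Google's API.

    # Structural sketch only: the real semantic and acoustic tokenizers are large
    # neural models (MuLan, AudioLM, SoundStream in the paper); these stubs just
    # show the shape of the text -> semantic tokens -> acoustic tokens pipeline.
    def caption_to_semantic_tokens(caption):
        # Stage 1 stand-in: map a text description to coarse "semantic" tokens.
        return [hash(word) % 1024 for word in caption.split()]

    def semantic_to_acoustic_tokens(semantic_tokens):
        # Stage 2 stand-in: expand semantic tokens into fine-grained "acoustic"
        # tokens that a neural codec would decode into 24 kHz audio.
        return [(t * 7 + i) % 4096 for t in semantic_tokens for i in range(4)]

    caption = "a calming violin melody backed by a distorted guitar riff"
    acoustic = semantic_to_acoustic_tokens(caption_to_semantic_tokens(caption))
    print(f"{len(acoustic)} acoustic tokens ready for audio decoding")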

[Category: Biz & IT, AI, google, machine learning, music synthesis, MusicLM]

Posted 1/30/23 10:37am
[Image: The Russian logo of Yandex, the country's largest search engine and a tech company with many divisions, inside the company's headquarters. (credit: SOPA Images / Getty Images)]

Nearly 45GB of source code files, allegedly stolen by a former employee, have revealed the underpinnings of Russian tech giant Yandex's many apps and services. The leak also revealed key ranking factors for Yandex's search engine, the kind almost never revealed in public.

The "Yandex git sources" were posted as a torrent file on January 25 and show files seemingly taken in July 2022 and dating back to February 2022. Software engineer Arseniy Shestakov claims that he verified with current and former Yandex employees that some archives "for sure contain modern source code for company services."

Yandex told security blog BleepingComputer that "Yandex was not hacked" and that the leak came from a former employee. Yandex stated that it did not "see any threat to user data or platform performance." The files notably date to February 2022, when Russia began a full-scale invasion of Ukraine. A former executive at Yandex told BleepingComputer that the leak was "political" and noted that the former employee had not tried to sell the code to Yandex competitors. Anti-spam code was also not leaked.

[Category: Biz & IT, Tech, google, russia, search, SEO, source code, Ukraine, yandex]

Posted 1/28/23 5:15am
For years, the cryptocurrency economy has been rife with black market sales, theft, ransomware, and money laundering—despite the strange fact that in that economy, practically every transaction is written into a blockchain’s permanent, unchangeable ledger. But new evidence suggests that years of advancements in blockchain tracing and crackdowns on that illicit underworld may be having an effect—if not reducing the overall volume of crime, then at least cutting down on the number of laundering outlets, leaving the crypto black market with fewer options to cash out its proceeds than it’s had in a decade.

In a portion of its annual crime report focused on money laundering that was published today, cryptocurrency-tracing firm Chainalysis points to a new consolidation in crypto criminal cash-out services over the past year. It counted just 915 of those services used in 2022, the fewest it’s seen since 2012 and the latest sign of a steady drop-off in the number of those services since 2018. Chainalysis says an even smaller number of exchanges now enable the money-laundering trade of cryptocurrency for actual dollars, euros, and yen: It found that just five cryptocurrency exchanges now handle nearly 68 percent of all black market cash-outs.

[Category: Biz & IT, Policy, cryptocurrency, exchanges, ransomware, syndication, tor]

Posted 1/27/23 12:39pm
[Image: An iteration of what happens when your site gets shut down by a DDoS attack.]

Threat actors loyal to the Kremlin have stepped up attacks in support of its invasion of Ukraine, with denial-of-service attacks hitting German banks and other organizations and the unleashing of a new destructive data wiper on Ukraine.

Germany's BSI agency, which monitors cybersecurity in that country, said the attacks caused small outages but ultimately did little damage. “Currently, some websites are not accessible,” the BSI said in a statement to news agencies. “There are currently no indications of direct effects on the respective service and, according to the BSI's assessment, these are not to be expected if the usual protective measures are taken.”

[Category: Biz & IT, DDoS, distributed denial of service attacks, germany, russia, Ukraine]

Posted 1/27/23 11:10am
[Image: An AI-generated image of a robot typewriter-journalist hard at work. (credit: Ars Technica)]

On Thursday, an internal memo obtained by The Wall Street Journal revealed that BuzzFeed is planning to use ChatGPT-style text synthesis technology from OpenAI to create individualized quizzes and potentially other content in the future. After the news hit, BuzzFeed's stock rose 200 percent. On Friday, BuzzFeed formally announced the move in a post on its site.

"In 2023, you'll see AI inspired content move from an R&D stage to part of our core business, enhancing the quiz experience, informing our brainstorming, and personalizing our content for our audience," BuzzFeed CEO Jonah Peretti wrote in a memo to employees, according to Reuters. A similar statement appeared on the BuzzFeed site.

The move comes as the buzz around OpenAI's ChatGPT language model reaches a fever pitch in the tech sector, inspiring more investment from Microsoft and reactive moves from Google. ChatGPT's underlying model, GPT-3, uses its statistical "knowledge" of millions of books and articles to generate coherent text in numerous styles, with results that read very close to human writing, depending on the topic. GPT-3 works by attempting to predict the most likely next words in a sequence (called a "prompt") provided by the user.
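For a sense of what "predicting the next words in a prompt" looks like in practice, here is a minimal sketch against OpenAI's completion endpoint as exposed by the openai Python package (0.x) at the time; the quiz-style prompt is an invented example, not BuzzFeed's.

    # pip install openai  (the 0.x client current at the time of this story)
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    prompt = ("Write one playful, personalized quiz question asking a reader "
              "about their ideal weekend breakfast.")

    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-family completion model
        prompt=prompt,
        max_tokens=60,
        temperature=0.7,
    )

    # The model simply continues the prompt with its most likely next words.
    print(response["choices"][0]["text"].strip())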

[Category: Biz & IT, AI, buzzfeed, ChatGPT, GPT-3, Jonah Peretti, large language model, machine learning, openai, Reuters, Stocks, Wall Street Journal]

Posted 1/26/23 2:39pm
[Image: An example of computer-synthesized handwriting generated by Calligrapher.ai. (credit: Ars Technica)]

Thanks to a free web app called calligrapher.ai, anyone can simulate handwriting with a neural network that runs in a browser via JavaScript. After typing a sentence, the site renders it as handwriting in nine different styles, each of which is adjustable with properties such as speed, legibility, and stroke width. It also allows downloading the resulting faux handwriting sample in an SVG vector file.

The demo is particularly interesting because it doesn't use a font. Typefaces that look like handwriting have been around for over 80 years, but each letter comes out as a duplicate no matter how many times you use it. During the past decade, computer scientists have relaxed those restrictions by discovering new ways to simulate the dynamic variety of human handwriting using neural networks.

[Category: Biz & IT, AI, deepfakes, handwriting, machine learning, neural networks, Sean Vasquez]

Posted 1/25/23 6:15pm
Three weeks ago, panic swept across some corners of the security world after researchers discovered a breakthrough that, at long last, put the cracking of the widely used RSA encryption scheme within reach by using quantum computing.

Scientists and cryptographers have known for two decades that a factorization method known as Shor’s algorithm makes it theoretically possible for a quantum computer with sufficient resources to break RSA. That’s because the secret prime numbers that underpin the security of an RSA key are easy to calculate using Shor’s algorithm. Computing the same primes using classical computing takes billions of years.

The only thing holding back this doomsday scenario is the massive amount of computing resources required for Shor’s algorithm to break RSA keys of sufficient size. The current estimate is that breaking a 1,024-bit or 2,048-bit RSA key requires a quantum computer with vast resources. Specifically, those resources are about 20 million qubits and about eight hours of them running in superposition. (A qubit is a basic unit of quantum computing, analogous to the binary bit in classical computing. But whereas a classical binary bit can represent only a single binary value such as a 0 or 1, a qubit is represented by a superposition of multiple possible states.)
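A textbook-sized illustration of why factoring is the whole problem: with toy primes, anyone who factors the public modulus can rederive the private key, which is exactly the step Shor's algorithm would make tractable for real 1,024-bit and 2,048-bit moduli. The sketch below uses deliberately tiny numbers and needs Python 3.8+ for the modular inverse.

    # Toy RSA with absurdly small primes; real keys use 1,024- or 2,048-bit moduli.
    p, q = 61, 53                      # the secret primes
    n = p * q                          # public modulus (3233)
    e = 17                             # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, derivable only from p and q

    message = 65
    ciphertext = pow(message, e, n)
    print(pow(ciphertext, d, n))       # 65 -- legitimate decryption works

    # An attacker who can factor n (the task Shor's algorithm handles efficiently,
    # given enough qubits) recovers p and q by brute force here, and with them d:
    p_found = next(c for c in range(2, n) if n % c == 0)
    q_found = n // p_found
    d_found = pow(e, -1, (p_found - 1) * (q_found - 1))
    print(pow(ciphertext, d_found, n))  # 65 again -- the key is fully recovered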

[Category: Biz & IT, encryption, Enigma, quantum computing]

Posted 1/24/23 1:31pm
[Image: Nvidia's Eye Contact feature automatically maintains eye contact with a camera for you. (credit: Nvidia)]

Nvidia recently released a beta version of Eye Contact, an AI-powered software video feature that automatically maintains eye contact for you while on-camera by estimating and aligning gaze. It ships with the 1.4 version of its Broadcast app, and the company is seeking feedback on how to improve it. In some ways, the tech may be too good because it never breaks eye contact, which appears unnatural and creepy at times.

To achieve its effect, Eye Contact replaces your eyes in the video stream with software-controlled simulated eyeballs that always stare directly into the camera, even if you're looking away in real life. The fake eyes attempt to replicate your natural eye color, and they even blink when you do.

So far, the response to Nvidia's new feature on social media has been largely negative. "I too, have always wanted streamers to maintain a terrifying level of unbroken eye contact while reading text that obviously isn't displayed inside their webcams," wrote The D-Pad on Twitter.

[Category: Biz & IT, AI, deepfake, Eye Contact, eyeballs, eyes, machine learning, NVIDIA, NVIDIA Broadcast, rtx]

Posted 1/23/23 4:47pm
[Image: An illustration of a chatbot exploding onto the scene, being very threatening. (credit: Benj Edwards / Ars Technica)]

ChatGPT has Google spooked. On Friday, The New York Times reported that Google founders Larry Page and Sergey Brin held several emergency meetings with company executives about OpenAI's new chatbot, which Google feels could threaten its $149 billion search business.

Created by OpenAI and launched in late November 2022, the large language model (LLM) known as ChatGPT stunned the world with its conversational ability to answer questions, generate text in many styles, aid with programming, and more. Google is now scrambling to catch up, with CEO Sundar Pichai declaring a “code red” to spur new AI development. According to the Times, Google hopes to reveal more than 20 new products—and demonstrate a version of its search engine with chatbot features—at some point this year.

[Category: Biz & IT, AI, AI ethics, ChatGPT, google, GPT-3, lamda, large language model, larry page, machine learning, microsoft, openai, sergey brin]

Posted 1/23/23 10:49am
[Image: The OpenAI logo superimposed over the Microsoft logo. (credit: Ars Technica)]

On Monday, AI tech darling OpenAI announced that it received a "multi-year, multi-billion dollar investment" from Microsoft, following previous investments in 2019 and 2021. While the two companies have not officially announced a dollar amount on the deal, the news follows rumors of a $10 billion investment that emerged two weeks ago.

Founded in 2015, OpenAI has been behind several key technologies that made 2022 the year that generative AI went mainstream, including DALL-E image synthesis, the ChatGPT chatbot (powered by GPT-3), and GitHub Copilot for programming assistance. ChatGPT, in particular, has made Google reportedly "panic" to craft a response, while Microsoft has reportedly been working on integrating OpenAI's language model technology into its Bing search engine.

“The past three years of our partnership have been great,” said Sam Altman, CEO of OpenAI, in a Microsoft news release. “Microsoft shares our values and we are excited to continue our independent research and work toward creating advanced AI that benefits everyone.”

[Category: Biz & IT, agi, AI, azure, machine learning, microsoft, openai]

Posted 1/23/23 10:40am
[Image: Holding up corporations, utilities, and hospitals for malware-encrypted data used to be quite profitable. But it's a tough gig lately, you know? (credit: ifanfoto/Getty Images)]

Two new studies suggest that ransomware isn't the lucrative, enterprise-scale gotcha it used to be. Profits to attackers' wallets, and the percentage of victims paying, fell dramatically in 2022, according to two separate reports.

Chainalysis, a blockchain analysis firm that has worked with a number of law enforcement and government agencies, suggests in a blog post that based on payments to cryptocurrency addresses it has identified as connected to ransomware attacks, payments to attackers fell from $766 million in 2021 to $457 million last year. The firm notes that its wallet data does not provide a comprehensive study of ransomware; it had to revise its 2021 total upward from $602 million for this report. But Chainalysis' data does suggest payments—if not attacks—are down since their pandemic peak.

[Image: Chainalysis' data from ransomware wallets suggests a marked decrease in payments to attackers last year—though the number of attacks may not have declined so markedly. (credit: Chainalysis)]

Chainalysis' post also shows attackers switching between malware strains more quickly, and more known attackers are keeping their funds in mainstream cryptocurrency exchanges instead of the illicit and funds-mixing destinations that were more popular in ransomware boom times. This might look like a sign of a mature market with a higher cost of entry. But there's more to it than typical economics, Chainalysis suggests.
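For scale, Chainalysis' two headline figures work out to roughly a 40 percent year-over-year drop:

    # The year-over-year decline implied by Chainalysis' payment figures above.
    payments_2021 = 766_000_000  # USD, revised 2021 total
    payments_2022 = 457_000_000  # USD
    decline = (payments_2021 - payments_2022) / payments_2021
    print(f"{decline:.1%} drop in tracked ransomware payments")  # prints ~40.3%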

[Category: Biz & IT, chainalysis, Coveware, crypto ransomware, cryptocurrency, ransomware, security]

As of 2/5/23 7:05am. Last new 2/3/23 4:52pm.
