
US Stock Market: Key News Roundup



Google Gemini Ultra launches! While acquisitions of technology start-ups by U.S. big tech have been declining for the past year and a half, the artificial intelligence sector is as hot as ever.

Apple, Google, Meta, and Microsoft are actively acquiring companies in the field of artificial intelligence.

Google's Gemini Ultra, which claims to have exceeded GPT-4's performance, is finally open to the public.

In addition, Google rebranded Bard as Gemini and released a paid tier (29,000 KRW/month) that includes Gemini Ultra.

Gemini Advanced uses the high-performance Ultra model and can be used in Google Docs and Sheets to make document work easier.

In addition, Gemini will also be integrated into the Google app, making it as easy to use as MS Copilot (which is itself gradually rolling out).

Google is finally catching up with Microsoft. Will Microsoft and OpenAI be able to pull ahead once again? If so, how?

I've been bored for the past few weeks(?), but I'm looking forward to this becoming an exciting fight again. 😎🍿🥤

Google PER chart: I don't understand the valuations of Kakao or Naver. People say platform companies like Kakao deserve higher PERs because of their infinite scalability and winner-take-all structure, but isn't it better to just buy Google, which is cheaper and more scalable? To some extent, I think they're selling snake oil.

Is the scalability greater than Google's? NO
Has the technology reached Google's level? NO (actually impossible)
Is the management as smart as Google's? NO
Is the cash-generating ability as good as Google's? NO
Is there a business as innovative as Google's in the pipeline? NO
Is it as aggressive as Google about shareholder returns? NO

The Semiconductor Boom Is Coming: AMD (Korean semiconductor analysis)
Microsoft ("Maso"), Nvidia, Arm, and the rebalancing of the memory market.

These days there are a lot of cases of ignoring the US market and just trading the Korean market. Currently, an AMD-Big Tech coalition is challenging Nvidia, which everyone is thoroughly sick of, and I want to write up the structure of this rebellion, because Nvidia's behavior has been dirty and petty.

"God Hwang" Jensen's Nvidia (NVDA), which Buffett and Munger ignore, has two powerful moats:
the CUDA IP stack and its interconnect.
Since the early days of GPU-accelerated computing, AI models have been designed on top of CUDA, so developers and researchers have little choice but to keep building on the CUDA platform.
In addition, CUDA can be programmed from common languages such as Python and C.

As if to mock its competitors, Nvidia launched the H100 at roughly 3x the efficiency of its own A100, literally slaughtering the other players' chips.

Generally speaking, a typical $7,500 server breaks down roughly as CPU $2,100 / DRAM $1,400 / SSD (NAND) $1,400 / etc.
(see JP Morgan's server BOM cost estimates)
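As a quick sanity check on those figures (assuming the $7,500 total and the per-part costs quoted above), the rough share of each component works out like this:

```python
# Back-of-the-envelope breakdown of the quoted server BOM
# (assumed figures: $7,500 total; CPU $2,100, DRAM $1,400, SSD $1,400).
bom = {"CPU": 2100, "DRAM": 1400, "SSD (NAND)": 1400}
total = 7500
bom["etc."] = total - sum(bom.values())  # board, PSU, chassis, and the rest

for part, cost in bom.items():
    print(f"{part:>10}: ${cost:>5}  ({cost / total:5.1%})")
```

Notice that memory (DRAM plus NAND) is already about 37% of even a plain server, which is why memory makers care so much about the server mix.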

But an AI server built around Nvidia's GPUs also needs much higher-spec CPUs, DRAM, and SSDs,
and Nvidia's A100 alone runs about $10,000 per chip (the H100 about $30,000), with eight of them per server.

Accordingly, Nvidia's AI servers currently come in two tiers.
A general-purpose A100 server sells for about $115,000, of which the Nvidia GPUs account for $80,000-85,000 (roughly 70-80%).
[Over 100 million KRW per server, with ~80% of the value going to Nvidia]

A professional H100 server sells for about $243,000, of which Nvidia's GPUs account for $200,000-210,000 (approximately 80%).
[Over 200 million KRW per server, with ~80% of the value going to Nvidia]
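Taking the midpoints of the quoted ranges (an assumption, since only ranges are given above), Nvidia's cut of each server works out to roughly 70-85%, in line with the "approximately 80%" figure:

```python
# GPU share of the quoted AI-server prices (assumed midpoints of the ranges above).
servers = {
    "A100 (general purpose)": {"server": 115_000, "gpus": 82_500},   # $80k-85k quoted
    "H100 (professional)":    {"server": 243_000, "gpus": 205_000},  # $200k-210k quoted
}
for name, s in servers.items():
    print(f"{name}: GPUs take {s['gpus'] / s['server']:.0%} of the server price")
```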

However, from the "semiconductor boom is coming" perspective, Nvidia's growth was very hard to cheer: Nvidia announced that the CPU and memory capacity required would shrink further as it moves from A100 to H100 to future models.
In fact, Nvidia's chips simply don't need much memory.

On top of that, here is one of the things everyone on Compuzone or Danawa complains about: rather than letting you upgrade a server's performance by adding memory, Nvidia caps memory capacity so it can sell its GPUs at a high price.

In other words, unlike before, the systems are designed so that you have to buy the H100 to see any performance improvement.

From the perspective of customers like Big Tech, who were propping up dull stock prices with AI marketing, this was maddening. Then AMD's Q2 earnings came in strong, Arm and Microsoft each made announcements, Nvidia dipped about 10%, and money began flowing into Korean semiconductors, starting with JPM.

AMD's MI300 is being developed with Microsoft as a next-generation AI chip,
and Amazon, Google, and Meta welcomed it with open arms ("so glad to finally meet you ㅠㅠ").
That alone shows how severe Nvidia's monopolistic tyranny felt from Big Tech's point of view.
While developing the MI300, AMD announced after its blowout Q2 earnings that it would sell it at about a 70% discount to Nvidia's MSFT-only H100 pricing.
(Thanks to this, actual performance aside, the Nvidia-capacity story is riding a happy rally and stock prices are skyrocketing.)

Unlike Nvidia, which designed its chips around NVLink,
AMD simply tiles GPUs together as chiplets, an approach that increases memory bandwidth.
To put it simply:
Nvidia's AI chip: "More memory capacity? No, boss, the GPU is enough lol. Taiwan doesn't need Korean semiconductors."
AMD's AI chip: makes it easy to build an AI server by greatly increasing memory.

So from the perspective of customers persecuted under God Hwang's tyranny:
once the MSFT-AMD AI chip is commercialized, the more memory the customer adds, the further the AI server can scale,
and since it is cheaper than Nvidia, performance can be improved cheaply.

Numerous AI applications are being developed, as in the metaverse days, and until now the training side of the AI data center has been an Nvidia monopoly.
> Nvidia's AI chips cap memory demand, so they are useless for memory makers.
> In the midst of this, MSFT, unable to stand God Hwang's price tyranny, joined hands with AMD first.
> AMD's chiplet technology improves yield while packing in more memory.
> If AMD's chip succeeds and is commercialized, usage of DRAM and NAND flash increases.
> That is why foreigners have been buying Korean semiconductors, SK Hynix and Samsung Electronics, since AMD's earnings announcement.

Conclusion: Korean semiconductor stocks will rise only if AMD succeeds.

This can be confirmed in DDR/DRAM futures prices: after the AMD-MSFT event, DRAM futures rebounded with very strong downside support.
What if AMD fails again...? Let's not even say that... [Techsuda - AI] Will open-source LLMs put the brakes on Nvidia's moves? ... Probably not ^.^

Nvidia is unrivaled in training,
and is now moving in earnest to eat the inference market as well.

CPU-focused companies such as Intel and AMD are also releasing training chips, but they still cannot overcome the 'CUDA' software ecosystem built up over more than a decade.

Most of the chips built by cloud service providers are inference chips, and most of the so-called AI startups are building inference chips too.

Nvidia's influence has grown beyond recognition as giant language models pour out all at once.

However, open-source models such as Meta's Llama 2 now make it possible to secure an LLM without a large-scale training infrastructure. Of course, they are not enough compared to OpenAI's GPT-4 or Gemini, built by combining DeepMind and Google Brain, but starting small beats over-investing from the beginning.

Infrastructure is expensive, and AI PhDs command a high price.

It looks like the thing that always ruins Korean projects: whoever steps up first loses their seat.

If getting hold of H100s is like plucking a star from the sky anyway, the options are to use a cloud service provider or just close your eyes and cooperate with Intel or AMD. Internal backlash may be scary, but they are pitching Intel to the 40- and 50-somethings who will stay in the field for a very long time: "let's be slim, long-term friends." ^.^

Low infrastructure costs and relatively low labor costs: then couldn't the service be offered relatively cheaply? And cheap doesn't have to mean shoddy.

Everyone already knows Nvidia can't be beaten outright.

However, the market does not want any one player holding more than 90 percent share. It could fall to perhaps 50 percent, and of course the AI market itself will grow that much in the meantime.

Gartner, a market research firm, predicted that global AI chip sales will reach $53.4 billion in 2023.

AI chip sales are expected to more than double through 2027, with growth driven by broad use of AI-based applications in data centers, edge infrastructure, and endpoint devices.

Custom-designed AI chips that are effective and cost-efficient are expected to be deployed more widely, replacing the discrete GPUs that are the main chip architecture in use today.
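As a rough check on that forecast (assuming "more than double" means at least 2x the $53.4B 2023 base over the four years to 2027):

```python
# Implied growth rate if AI chip sales double from the 2023 Gartner figure by 2027.
base_2023 = 53.4          # $B, Gartner figure quoted above
years = 4                 # 2023 -> 2027
doubled = 2 * base_2023
cagr = (doubled / base_2023) ** (1 / years) - 1
print(f"Doubling to ${doubled:.1f}B by 2027 implies a CAGR of about {cagr:.1%}")
```

So "more than double" means growing at roughly 19% a year, every year, for four years straight.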

Thinking about why Microsoft didn't rush to build such a chip: I think it's because the OpenAI people it joined hands with chose general-purpose Nvidia GPUs to get the business moving. First use that infrastructure to apply AI Copilot across all of its products and services, then prepare to scale the service out by jointly building the semiconductors it needs.

In the meantime, Intel and AMD will also prepare to some extent.

From a cloud operator's point of view, it is enough to offer a variety of substitute instances and give customers the choice.

Looking at Microsoft's recent moves: having largely finished preparations with OpenAI and in-house, it is offering Meta's open-source LLM as an internal service product, and going one step further by releasing Databricks' LLM as a cloud service product.

