    Waldencast Reports Q4 2024 and Fiscal Year 2024 Financial Results

    Q4 Net Revenue of $72.1 million, 29.4% Comparable Net Revenue Growth and $11.2 million of Adjusted EBITDA, doubling from Q4 2023

    FY 2024 Net Revenue of $273.9 million, 27.5% Comparable Net Revenue Growth and $40.3 million of Adjusted EBITDA

    Obagi Medical is the fastest-growing professional skincare brand¹ in the US in 2024

    Milk Makeup expands its distribution to Ulta Beauty

    Waldencast secures a new $205 million credit facility, replacing the current one, enhancing flexibility and extending debt maturity

    LONDON, March 18, 2025 (GLOBE NEWSWIRE) — Waldencast plc (NASDAQ: WALD) (“Waldencast” or the “Company”), a global multi-brand beauty and wellness platform, today reported operating results for the three months ended December 31, 2024 (“Q4 2024”) and the year ended December 31, 2024 (the “Year Ended 2024”) on Form 6-K to the U.S. Securities and Exchange Commission (the “SEC”), which are also available on our investor relations site at http://ir.waldencast.com/.

    Michel Brousset, Waldencast Founder and CEO, said: “We closed a transformative year for the Group, achieving outstanding growth, expanding our brands’ communities, and making significant progress on our strategic priorities. Our business model is driven by a powerful flywheel effect of growth and profitability. This begins with the unique strength of our brands, which is amplified by our ability to enhance operational efficiency. As a result, we can effectively increase investments in sales and marketing to drive profitable growth. In 2024, we achieved a 27.5% increase in Comparable Net Revenue and a 65.1% rise in Adjusted EBITDA, demonstrating our proven ability to expand gross margins and optimize our cost base as we grow.”

    “Our proven ability to innovate significantly contributed to our brands’ growth. This year, Milk Makeup introduced several exciting new products, including the viral and award-winning Cooling Water Jelly Tint Blush + Lip Stain. Obagi Medical also launched a range of successful innovations aimed at both consumers and the professional skincare medical community, most notably the ELASTIderm Lift Up & Sculpt Facial Moisturizer and Elastiderm Advanced Filler Concentrate.”

    “Building on our momentum, we are excited to announce that Milk Makeup will launch in over 600 Ulta Beauty locations this spring, further highlighting the growing demand for our cult-favorite brand. Additionally, Obagi Medical expanded the Suzan Obagi MD® collection with groundbreaking new products, including the Super Antioxidant Serum and the Moisture Restore Hydration Replenishing Cream.”

    ____________________________________

    ¹ Among the top 10 brands. Kline & Company. (2024). 2024 Kline Professional Skincare: United States market analysis and opportunities.

    “Overall, we are excited about the year ahead and expect another year of significant milestones toward achieving our ambition to build a global best-in-class beauty and wellness multi-brand platform by creating, acquiring, accelerating, and scaling the next generation of high-growth, purpose-driven brands,” concluded Mr. Brousset.

    Q4 2024 Results Overview

    Please refer to the definitions and reconciliations set out later in this release for the adjusted non-GAAP measures discussed below. These measures are included to aid understanding of the underlying performance of the business, but should not be seen as a substitute for the U.S. GAAP figures presented in this release.

    For the three months ended December 31, 2024 compared to the three months ended December 31, 2023:

    Net Revenue increased 30.8% to $72.1 million, with Comparable Net Revenue Growth of 29.4%, attributable to Milk Makeup’s channel expansion, Obagi Medical’s accelerated growth in the Physician Dispense channel, and continued success in Obagi Medical’s e-commerce channels.

    Gross Profit was $49.4 million. Adjusted Gross Profit was $52.6 million, or 73.0% of net revenue, compared to $40.3 million in Q4 2023.

    Net Loss improved from $32.7 million in Q4 2023 to $22.6 million in Q4 2024, driven by operational growth and a reduction in non-recurring costs associated with the restatement and SEC investigation.

    Adjusted EBITDA doubled to $11.2 million (15.5% of net revenue), reflecting a 530 basis point expansion from Q4 2023. This growth was driven by strong top-line performance and operational leverage, as both Obagi Medical and Milk Makeup continued to scale and reinvest in business drivers while maintaining G&A discipline.

    Liquidity: The business maintained strong cash conversion in Q4 2024, driven by effective working capital management and minimal capital expenditure thanks to our asset-light business model. While the Company continues to incur significant non-recurring legal and advisory costs, the level of expenditure has been gradually declining. As of December 31, 2024, the Company had $14.8 million in cash and cash equivalents and $154.2 million of Net Debt.

    New Credit Facility: Waldencast has entered into a new $205 million five-year credit facility, comprising a $175 million Term Loan and a $30 million RCF, that replaces its existing facility. This agreement supports the Company’s strategic priorities by enhancing financial flexibility and extending the debt maturity profile well ahead of the current facility’s expiration in July 2026.

    Outstanding Shares: As of February 28, 2025, we had 122,720,911 ordinary shares outstanding, consisting of 112,054,383 Class A shares and 10,666,528 Class B shares. As of December 31, 2024, we had 122,692,968 ordinary shares outstanding, consisting of 112,026,440 Class A shares and 10,666,528 Class B shares.

    (In $ millions, except for percentages)   Q4 2024   % Sales   % Growth   % Comparable Net Revenue Growth     Q4 2023   % Sales
    Waldencast                          
    Net Revenue   72.1   100.0%   30.8%   29.4%     55.1   100.0%
    Adjusted Gross Profit   52.6   73.0%   30.7%         40.3   73.1%
    Adjusted EBITDA   11.2   15.5%   99.3%         5.6   10.2%
                               
    Obagi Medical                          
    Net Revenue   42.2   100.0%   30.0%   27.7%     32.5   100.0%
    Adjusted Gross Profit   33.2   78.7%   28.0%         26.0   80.0%
    Adjusted EBITDA   9.8   23.3%   23.7%         8.0   24.5%
                               
    Milk Makeup                          
    Net Revenue   29.9   100.0%   31.9%         22.6   100.0%
    Adjusted Gross Profit   19.4   64.9%   35.6%         14.3   63.1%
    Adjusted EBITDA   4.8   16.1%   248.0%         1.4   6.1%
     

    Fourth Quarter 2024 Brand Highlights:

    Obagi Medical:

    • Net Revenue reached $42.2 million, from $32.5 million in Q4 2023 with Comparable Net Revenue Growth of 27.7%.
    • Obagi Medical’s strong net revenue growth continued to be driven by increased brand awareness, stronger selling and marketing investments, and continued innovation. The brand continued expanding its international footprint and growing e-commerce sales through its direct website and the move to a first-party model with its main e-commerce distributor, implemented in late 2023, with benefits tapering off by Q1 2025.
    • Notably, Obagi Medical was the fastest-growing professional skin care brand among the top 10 in the US in 2024¹. This historic achievement underscores the strength of our enhanced go-to-market strategy, which successfully balances growth in the Physician Dispense channel, our historic stronghold, with the acceleration of our digital channels.
    • Adjusted Gross Margin of 78.7% contracted 130 basis points from Q4 2023 due to a higher weight of inventory liquidations.
    • Adjusted EBITDA was $9.8 million, an increase of 23.7% from Q4 2023, with an Adjusted EBITDA margin of 23.3%, a decline of 120 basis points from Q4 2023, reflecting the brand’s continued strategic investment in marketing to drive top-line growth, partly offset by improved leverage of fixed costs.

    Milk Makeup:

    • Net Revenue reached $29.9 million, up 31.9% from $22.6 million in Q4 2023.
    • Milk Makeup’s Q4 2024 growth reflected the initial shipments to Ulta Beauty in support of the brand’s spring 2025 launch along with increased demand driven by our growing awareness, the continued delivery of sought-after innovation, and international expansion.
    • Adjusted Gross Margin increased by 180 basis points versus Q4 2023, primarily reflecting the positive impact of channel and product mix, as well as margin accretive innovation.
    • Adjusted EBITDA was $4.8 million, an increase of $3.4 million from $1.4 million in Q4 2023. Adjusted EBITDA Margin improved 1,000 basis points to 16.1% from 6.1% in Q4 2023, as robust sales growth and gross margin expansion drove significant operational leverage despite increased brand investment.

    Year Ended 2024 Results Overview

    For the year ended December 31, 2024 compared to the year ended December 31, 2023:

    Net Revenue increased 25.5% to $273.9 million, with Comparable Net Revenue Growth of 27.5%.

    Gross Profit was $191.7 million. Adjusted Gross Profit was $203.6 million, or 74.3% of net revenue, a margin improvement of 530 basis points versus 2023.

    Net Loss was $48.6 million, down from $106.0 million in the Year Ended 2023. The improvement was primarily driven by strong operational growth in the business, a fair value adjustment of the warrants, and reduced non-recurring costs.

    Adjusted EBITDA was $40.3 million, an Adjusted EBITDA Margin of 14.7%, compared to 11.2% in the Year Ended 2023.

    Fiscal 2025 Outlook:

    We expect to deliver mid-teens Net Revenue growth and further expansion of Adjusted EBITDA Margin into the mid-to-high teens.

    Net revenue growth is expected to accelerate throughout the year, starting with relatively flat growth in Q1 as we anniversary the highly successful Milk Makeup “Jellies” launch from Q1 2024 and absorb inventory adjustments at some of our retail partners.

    Growth is expected to accelerate progressively in the following quarters, driven by our innovation pipeline and the continued expansion of our distribution footprint in the U.S. and internationally, including the launch of Milk Makeup at Ulta Beauty in March 2025.

    Year Ended 2024 Highlights

    (In $ millions, except for percentages)   Year Ended 2024   % Sales   % Growth   % Comparable Net Revenue Growth     Year Ended 2023   % Sales
    Waldencast                          
    Net Revenue   273.9   100.0%   25.5%   27.5%     218.1   100.0%
    Adjusted Gross Profit   203.6   74.3%   35.3%         150.4   69.0%
    Adjusted EBITDA   40.3   14.7%   65.1%         24.4   11.2%
                               
    Obagi Medical                          
    Net Revenue   149.3   100.0%   26.9%   30.7%     117.7   100.0%
    Adjusted Gross Profit   118.6   79.4%   41.6%         83.7   71.2%
    Adjusted EBITDA   30.5   20.4%   46.4%         20.8   17.7%
                               
    Milk Makeup                          
    Net Revenue   124.6   100.0%   24.0%         100.5   100.0%
    Adjusted Gross Profit   85.0   68.2%   27.4%         66.7   66.4%
    Adjusted EBITDA   29.1   23.3%   58.0%         18.4   18.3%
     

    Conference Call and Webcast Information

    Waldencast will host a conference call to discuss its year-end and fourth quarter results on Wednesday, March 19, 2025, at 8:30 AM EDT for the period ended December 31, 2024. Those interested in participating in the conference call are invited to dial (877) 704-4453. International callers may dial (201) 389-0920. A live webcast of the conference call will include a slide presentation and will be available online at https://ir.waldencast.com/. A replay of the webcast will remain available on the website until our next conference call. The information accessible on, or through, our website is not incorporated by reference into this release.

    Non-GAAP Financial Measures

    In addition to the financial measures presented in this release in accordance with U.S. GAAP, Waldencast separately reports financial results on the basis of the measures set out and defined below which are non-GAAP financial measures. Waldencast believes the non-GAAP measures used in this release provide useful information to management and investors regarding certain financial and business trends relating to its financial condition and results of operations. Waldencast believes that the use of these non-GAAP financial measures provides an additional tool for investors to use in evaluating ongoing operating results and trends. These non-GAAP measures also provide perspective on how Waldencast’s management evaluates and monitors the performance of the business.

    There are limitations to non-GAAP financial measures because they exclude charges and credits that are required to be included in GAAP financial presentation. The items excluded from GAAP financial measures such as net income/loss to arrive at non-GAAP financial measures are significant components for understanding and assessing our financial performance. Non-GAAP financial measures should be considered together with, and not alternatives to, financial measures prepared in accordance with GAAP.

    Please refer to definitions set out in the release and the tables included in this release for a reconciliation of these metrics to the most directly comparable GAAP financial measures.

    Comparable Net Revenue is defined as Net Revenue excluding sales related to the former Obagi Medical China business (the “Obagi Medical China Business”), which was not acquired by Waldencast at the time of the business combination with Obagi Medical and Milk Makeup (the “Business Combination”), consistent with the presentation in previous earnings releases. Sales to the Obagi Medical China Business carry a below-market sales price for a defined period of time after the acquisition of Obagi Medical pursuant to the Business Combination. As a result of the Business Combination, a below-market contract liability was recognized and is amortized based on sales. This adjustment is shown in the Adjusted EBITDA and Adjusted Gross Profit reconciliations. Management of the Company believes that this non-GAAP measure provides perspective on how Waldencast’s management evaluates and monitors the performance of the business. See the reconciliation to U.S. GAAP Net Revenue in the Appendix.

    Comparable Net Revenue Growth is defined as the growth in Comparable Net Revenue period over period expressed as a percentage.

    Adjusted Gross Profit is defined as GAAP gross profit excluding the impact of inventory fair value adjustments, amortization of the supply agreement and formulation intangible assets, discontinued product write-off, and the amortization of the fair value of the related party liability from the Obagi Medical China Business. The Adjusted Gross Profit reconciliation by Segment for each period is included in the Appendix.

    Adjusted Gross Margin is defined as Adjusted Gross Profit divided by GAAP Net Revenue.

    Adjusted EBITDA is defined as GAAP net income (loss) before interest income or expense, income tax (benefit) expense, depreciation and amortization, and further adjusted for the items as described in the reconciliation below. We believe this information will be useful for investors to facilitate comparisons of our operating performance and better identify trends in our business. Adjusted EBITDA excludes certain expenses that are required to be presented in accordance with GAAP because management believes they are non-core to our regular business. These include non-cash expenses, such as depreciation and amortization, stock-based compensation, inventory fair value adjustments, the amortization and release of fair value of the related party liability to the Obagi Medical China Business, change in fair value of financial instruments, loss on impairment of goodwill and leases, and foreign currency translation loss (gain). In addition, adjustments include expenses that are not related to our underlying business performance including (1) legal, advisory and consultant fees related to the financial restatement of previously issued financial statements and associated regulatory investigation, and the Business Combination; (2) costs to recover and the value of the inventory recovered from the acquisition of the SA distributor, and the associated discontinued products; and (3) other non-recurring costs, primarily legal settlement costs and restructuring costs. The Adjusted EBITDA by Segment for each period is included in the Appendix.

    Adjusted EBITDA Margin is defined as Adjusted EBITDA as a percentage of net revenue. The Adjusted EBITDA Margin reconciliation by Segment for each period is included in the Appendix.

    (In thousands, except for percentages)   Q4 2024   Q4 2023   Year Ended 2024   Year Ended 2023
    Net Loss   $ (22,597 )   $ (32,731 )   $ (48,648 )   $ (105,968 )
    Adjusted For:                
    Depreciation and amortization     15,013       14,863       60,015       60,498  
    Interest expense, net     4,088       4,276       17,155       18,888  
    Income tax expense (benefit)     4,113       (976 )     110       (6,975 )
    Stock-based compensation expense     2,993       1,677       9,392       9,235  
    Legal and advisory non-recurring costs(1)     3,029       12,949       21,493       32,783  
    Change in fair value of warrants and interest rate collar     443       2,473       (23,679 )     10,443  
    Amortization and release of related party liability(2)     (4,169 )           (5,678 )     (4,058 )
    Loss on impairment of goodwill     5,031             5,031        
    Other costs(3)     3,241       3,083       5,093       9,549  
    Adjusted EBITDA   $ 11,185     $ 5,613     $ 40,284     $ 24,395  
    Net Revenue   $ 72,083     $ 55,117     $ 273,868     $ 218,138  
    Net Loss % of Net Revenue     (31.3 )%     (59.4 )%     (17.8 )%     (48.6 )%
    Adjusted EBITDA Margin     15.5 %     10.2 %     14.7 %     11.2 %
     
    (1) Includes mainly legal, advisory and consultant fees related to the financial restatement of the 2020–2022 periods and the associated regulatory investigation, and the Business Combination.
    (2) Relates to the fair value of the related party liability for the unfavorable discount to the Obagi Medical China Business as part of the Business Combination.
    (3) Other costs include legal settlements, foreign currency translation losses, product discontinuation costs related to advanced purchases for the SA Distributor, the write-down and subsequent recovery of inventory from the SA Distributor, restructuring costs, amortization of the fair value step-up as a result of the business combination, lease impairments, and contract termination fees.
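    As an illustration of the reconciliation above, the Q4 2024 figures can be checked in a few lines of Python. This is a reader-side sketch using the table’s values (in $ thousands), not part of the Company’s reporting; the variable names are illustrative only.

```python
# Rebuild Q4 2024 Adjusted EBITDA from the reconciliation table above
# (figures in $ thousands; names are illustrative, not official line items).
net_loss = -22_597
adjustments = {
    "depreciation_and_amortization": 15_013,
    "interest_expense_net": 4_088,
    "income_tax_expense": 4_113,
    "stock_based_compensation": 2_993,
    "legal_and_advisory_non_recurring": 3_029,
    "change_in_fv_warrants_and_collar": 443,
    "amortization_release_related_party_liability": -4_169,
    "loss_on_impairment_of_goodwill": 5_031,
    "other_costs": 3_241,
}
net_revenue = 72_083

# Adjusted EBITDA = net loss plus the sum of all add-backs (some negative).
adjusted_ebitda = net_loss + sum(adjustments.values())
adjusted_ebitda_margin = adjusted_ebitda / net_revenue

print(adjusted_ebitda)                  # 11185, i.e. $11.2 million
print(f"{adjusted_ebitda_margin:.1%}")  # 15.5%
```

    The same pattern reproduces the FY 2024 figure of $40,284 thousand from the year-end column.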
       

    Net Debt Position is defined as the principal outstanding under the 2022 Term Loan and 2022 Revolving Credit Facility minus cash and cash equivalents as of December 31, 2024.

    (In thousands)   Reconciliation of Net Carrying Amount of Debt to Net Debt
    Current portion of long-term debt   $ 29,479  
    Long-term debt     137,137  
    Net carrying amount of debt     166,616  
    Adjustments:    
    Add: Unamortized debt issuance costs     2,339  
    Less: Cash & cash equivalents     (14,802 )
    Net Debt   $ 154,153  
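    The Net Debt arithmetic above can be traced step by step. The sketch below (reader-side, in $ thousands, with illustrative variable names) adds back unamortized debt issuance costs to move from the net carrying amount of debt to gross principal, then subtracts cash:

```python
# Net Debt per the definition above: gross principal outstanding minus cash
# (figures in $ thousands from the reconciliation table).
current_portion_of_long_term_debt = 29_479
long_term_debt = 137_137
unamortized_debt_issuance_costs = 2_339  # added back: carrying amount is net of these
cash_and_equivalents = 14_802

net_carrying_amount = current_portion_of_long_term_debt + long_term_debt
net_debt = net_carrying_amount + unamortized_debt_issuance_costs - cash_and_equivalents

print(net_carrying_amount)  # 166616
print(net_debt)             # 154153, i.e. $154.2 million
```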
             

    About Waldencast plc

    Founded by Michel Brousset and Hind Sebti, Waldencast’s ambition is to build a global best-in-class beauty and wellness operating platform by developing, acquiring, accelerating, and scaling conscious, high-growth purpose-driven brands. Waldencast’s vision is fundamentally underpinned by its brand-led business model that ensures proximity to its customers, business agility, and market responsiveness, while maintaining each brand’s distinct DNA. The first step in realizing its vision was the Business Combination. As part of the Waldencast platform, its brands will benefit from the operational scale of a multi-brand platform; the expertise in managing global beauty brands at scale; a balanced portfolio to mitigate category fluctuations; asset-light efficiency; and the market responsiveness and speed of entrepreneurial indie brands. For more information, please visit: https://ir.waldencast.com.

    Obagi Medical is an industry-leading, advanced skin care line rooted in research and skin biology, refined with a legacy of over 35 years’ experience. First known as leaders in the treatment of hyperpigmentation with the Obagi Nu-Derm® System, Obagi Medical products are designed to address the appearance of premature aging, photodamage, skin discoloration, acne, and sun damage. More information about Obagi Medical is available on the brand’s website at www.obagi.com.

    Founded in 2016, Milk Makeup quickly became a cult-favorite among the beauty community for its values of self-expression and inclusion, captured by its signature “Live Your Look”, its innovative formulas, and clean ingredients. The brand creates vegan, cruelty-free, clean formulas and has its Milk Makeup HQ in Downtown NYC. Currently, Milk Makeup offers over 250 products through its U.S. website www.MilkMakeup.com, and retail partners including Sephora globally, Ulta Beauty in the U.S., Lyko in Scandinavia, Space NK and Boots in the United Kingdom and many more.

    Cautionary Statement Regarding Forward-Looking Statements

    All statements in this release that are not historical, are forward-looking statements made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. Such statements include, but are not limited to, statements about: Waldencast’s outlook and guidance for 2025; our ability to deliver financial results in line with expectations; expectations regarding sales, earnings or other future financial performance and liquidity or other performance measures; our long-term strategy and future operations or operating results; expectations with respect to our industry and the markets in which it operates; future product introductions; developments relating to the ongoing investigation and legal proceedings; and any assumptions underlying any of the foregoing. Words such as “anticipate,” “believe,” “continue,” “could,” “estimate,” “expect,” “intend,” “may,” “plan,” “predict,” “project,” “should,” and “will” and variations of such words and similar expressions are intended to identify such forward-looking statements.

    These forward-looking statements are not guarantees of future performance, conditions or results, and involve a number of known and unknown risks, uncertainties, assumptions and other important factors, many of which are outside of our control, that could cause actual results or outcomes to differ materially from those discussed in the forward-looking statements, including, among others: (i) the impact of the material weaknesses in our internal control over financial reporting, including associated investigations, our efforts to remediate such material weakness and the timing of remediation and resolution of associated investigations; (ii) our ability to recognize the anticipated benefits from any acquired business, including the Business Combination; (iii) our ability to successfully implement our management’s plans and strategies; (iv) the overall economic and market conditions, sales forecasts and other information about our possible or assumed future results of operations or our performance; (v) the general impact of geopolitical events, including the impact of current wars, conflicts or other hostilities; (vi) the potential for delisting, legal proceedings or existing or new government investigation or enforcement actions, including those relating to the restatement or the subject of the Audit Committee of our Board of Directors’ review further described in our annual report filed on Form 20-F for the year ended December 31, 2022, (vii) our ability to manage expenses, our liquidity and our investments in working capital; (viii) any failure to obtain governmental and regulatory approvals related to our business and products; (ix) the impact of any international trade or foreign exchange restrictions, increased tariffs, foreign currency exchange fluctuations; (x) our ability to raise additional capital or complete desired acquisitions; (xi) our ability to comply with financial covenants imposed by the new 2025 credit agreement we entered into referenced in 
the section entitled “New Credit Facility” above and the impact of debt service obligations and restricted debt covenants; (xii) volatility of Waldencast’s securities due to a variety of factors, including Waldencast’s inability to implement its business plans or meet or exceed its financial projections and changes; (xiii) the ability to implement business plans, forecasts, and other expectations, and identify and realize additional opportunities; (xiv) the ability of Waldencast to implement its strategic initiatives and continue to innovate Obagi Medical’s and Milk Makeup’s existing products and anticipate and respond to market trends and changes in consumer preferences, (xv) any shifts in the preferences of consumers as to where and how they shop; (xvi) the impact of any unfavorable publicity on our business or products; (xvii) changes in future exchange or interest rates or credit ratings; (xviii) changes in, and uncertainty with respect to, laws, regulations, and policies, including as a result of the change in the U.S. administration; and (xix) social, political and economic conditions. These and other risks, assumptions and uncertainties are more fully described in the Risk Factors section of our 2023 20-F (File No. 01-40207), filed with the SEC on April 30, 2024, and in our other documents that we file or furnish with the SEC, which you are encouraged to read.

    Should one or more of these risks or uncertainties materialize, or should underlying assumptions prove incorrect, actual results may vary materially from those indicated or anticipated by such forward-looking statements. Accordingly, you are cautioned not to rely on these forward-looking statements, which speak only as of the date they are made. Waldencast expressly disclaims any current intention, and assumes no duty, to update publicly any forward-looking statement after the distribution of this release, whether as a result of new information, future events, changes in assumptions or otherwise.

    Contacts:

    Investors
    ICR
    Allison Malkin
    waldencastir@icrinc.com

    Media
    ICR
    Brittney Fraser/Alecia Pulman
    waldencast@icrinc.com

    Appendix

    Comparable Net Revenue Growth

        Group   Obagi Medical
    (In thousands, except for percentages)   Q4 2024   Q4 2023   Year Ended 2024   Year Ended 2023   Q4 2024   Q4 2023   Year Ended 2024   Year Ended 2023
    Net Revenue   $ 72,083     $ 55,117   $ 273,868     $ 218,138   $ 42,211     $ 32,470   $ 149,266     $ 117,651
    Obagi Medical China Business     735           2,804       5,619     735           2,804       5,619
    Comparable Net Revenue   $ 71,348     $ 55,117   $ 271,064     $ 212,519   $ 41,476     $ 32,470   $ 146,462     $ 112,032
    Comparable Growth     29.4 %         27.5 %         27.7 %         30.7 %    
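    The Group’s Q4 2024 Comparable Growth in the table above follows directly from the definition: strip out the Obagi Medical China Business sales, then compare period over period. A reader-side sketch (in $ thousands, illustrative names):

```python
# Q4 2024 Group Comparable Net Revenue Growth, per the definition above
# (figures in $ thousands from the Appendix table).
net_revenue_q4_2024 = 72_083
china_business_q4_2024 = 735      # excluded from the comparable base
net_revenue_q4_2023 = 55_117      # no Obagi Medical China Business sales in Q4 2023

comparable_q4_2024 = net_revenue_q4_2024 - china_business_q4_2024
comparable_growth = comparable_q4_2024 / net_revenue_q4_2023 - 1

print(comparable_q4_2024)        # 71348
print(f"{comparable_growth:.1%}")  # 29.4%
```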
                                                     

    Adjusted Gross Profit

        Group
    (In thousands, except for percentages)   Q4 2024   Q4 2023   Year Ended 2024   Year Ended 2023
    Net Revenue   $ 72,083     $ 55,117     $ 273,868     $ 218,138  
    Gross Profit     49,450       37,476       191,744       141,577  
    Gross Profit Margin     68.6 %     68.0 %     70.0 %     64.9 %
    Gross Margin Adjustments:                
    Amortization of the fair value of the related party liability(1)     (750 )           (2,260 )     (4,058 )
    Amortization of the inventory fair value adjustment(2)                       1,691  
    Discontinued product write-off(3)     1,139             2,864        
    Amortization impact of intangible assets(4)     2,801       2,801       11,205       11,205  
    Adjusted Gross Profit   $ 52,639     $ 40,277     $ 203,553     $ 150,415  
    Adjusted Gross Margin %     73.0 %     73.1 %     74.3 %     69.0 %
                                     

     

    (1) Relates to the fair value of the related party liability for the unfavorable discount to the Obagi Medical China Business as part of the Business Combination.
    (2) Relates to the amortization of the inventory fair value step-up as a result of the Business Combination.
    (3) Relates to the advance purchase of specific products for the market in Vietnam sold through the SA Distributor that became obsolete when the distribution contract was terminated.
    (4) The Supply Agreement and Formulations intangible assets are amortized to cost of goods sold.
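    The Group-level reconciliation above can likewise be retraced from the definition of Adjusted Gross Profit. The sketch below (reader-side, in $ thousands, illustrative names) uses the Year Ended 2024 column:

```python
# FY 2024 Group Adjusted Gross Profit, per the definition above
# (figures in $ thousands from the reconciliation table).
gross_profit = 191_744
amortization_related_party_liability = -2_260   # Obagi Medical China Business liability
discontinued_product_write_off = 2_864
amortization_intangible_assets = 11_205         # supply agreement and formulations
net_revenue = 273_868

adjusted_gross_profit = (gross_profit
                         + amortization_related_party_liability
                         + discontinued_product_write_off
                         + amortization_intangible_assets)
adjusted_gross_margin = adjusted_gross_profit / net_revenue

print(adjusted_gross_profit)          # 203553, i.e. $203.6 million
print(f"{adjusted_gross_margin:.1%}")  # 74.3%
```

    (Note that Adjusted Gross Margin is defined against GAAP Net Revenue, not Comparable Net Revenue.)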
       
        Obagi Medical   Milk Makeup
    (In thousands, except for percentages)   Q4 2024   Q4 2023   Year Ended 2024   Year Ended 2023   Q4 2024   Q4 2023   Year Ended 2024   Year Ended 2023
    Net Revenue   $ 42,211     $ 32,470     $ 149,266     $ 117,651     $ 29,872     $ 22,647     $ 124,602     $ 100,487  
    Gross Profit     30,050       23,175       106,760       76,582       19,395       14,301       84,984       64,995  
    Gross Profit Margin     71.2 %     71.4 %     71.5 %     65.1 %     64.9 %     63.1 %     68.2 %     64.7 %
    Gross Margin Adjustments:                                
    Amortization of the fair value of the related party liability     (750 )           (2,260 )     (4,058 )                        
    Amortization of the inventory fair value adjustment                                               1,691  
    Discontinued product write-off     1,139             2,864                                
    Amortization impact of intangible assets     2,801       2,801       11,205       11,205                          
    Adjusted Gross Profit   $ 33,239     $ 25,976     $ 118,569     $ 83,729     $ 19,395     $ 14,301     $ 84,984     $ 66,686  
    Adjusted Gross Margin %     78.7 %     80.0 %     79.4 %     71.2 %     64.9 %     63.1 %     68.2 %     66.4 %
                                                                     

    Adjusted EBITDA Margin by Segment

        Obagi Medical   Milk Makeup
    (In thousands, except for percentages)   Q4 2024   Q4 2023   Year Ended 2024   Year Ended 2023   Q4 2024   Q4 2023   Year Ended 2024   Year Ended 2023
    Net Loss   $ (12,114 )   $ (8,305 )   $ (31,524 )   $ (32,214 )   $ 230     $ (3,959 )   $ 8,803     $ (5,655 )
    Adjusted For:                                
    Depreciation and amortization     10,397       10,425       41,591       41,984       4,616       4,457       18,424       18,514  
    Interest expense, net     3,068       3,341       12,391       12,644       (3 )     4       (1 )     590  
    Income tax expense (benefit)     3,933       (990 )     (141 )     (6,997 )     25       9       32       10  
    Stock-based compensation expense     465       (317 )     (328 )     726       (338 )     444       1,167       2,352  
    Legal and advisory non-recurring costs     1,061       1,119       5,054       1,702                         27  
    Amortization and release of related party liability     (4,169 )           (5,678 )     (4,058 )                        
    Loss on impairment of goodwill     5,031             5,031                                
    Other costs     2,166       2,682       4,120       7,027       285       428       639       2,566  
    Adjusted EBITDA   $ 9,838     $ 7,956     $ 30,516     $ 20,814     $ 4,814     $ 1,383     $ 29,064     $ 18,404  
    Net Revenue   $ 42,211     $ 32,470     $ 149,266     $ 117,651     $ 29,872     $ 22,647     $ 124,602     $ 100,487  
    Net Loss % of Net Revenue     (28.7 )%     (25.6 )%     (21.1 )%     (27.4 )%     0.8 %     (17.5 )%     7.1 %     (5.6 )%
    Adjusted EBITDA Margin     23.3 %     24.5 %     20.4 %     17.7 %     16.1 %     6.1 %     23.3 %     18.3 %
                                                                     
    Central costs

    (In thousands, except for percentages. Q4 = three months ended December 31; FY = year ended December 31.)

    |                                                           | Q4 2024 | Q4 2023 | FY 2024 | FY 2023 |
    | Net Loss                                                  | $(10,714) | $(20,467) | $(25,927) | $(68,099) |
    | Adjusted For:                                             |  |  |  |  |
    | Depreciation and amortization                             | — | (20) | — | — |
    | Interest expense, net                                     | 1,024 | 931 | 4,765 | 5,654 |
    | Income tax expense                                        | 155 | 4 | 219 | 12 |
    | Stock-based compensation expense                          | 2,866 | 1,549 | 8,553 | 6,157 |
    | Legal and advisory non-recurring costs                    | 1,968 | 11,830 | 16,439 | 31,054 |
    | Change in fair value of warrants and interest rate collar | 443 | 2,473 | (23,679) | 10,443 |
    | Other costs                                               | 789 | (26) | 334 | (44) |
    | Adjusted EBITDA                                           | $(3,468) | $(3,727) | $(19,296) | $(14,823) |
    | Net Revenue                                               | $— | $— | $— | $— |
    | Net Loss % of Net Revenue                                 | N/A | N/A | N/A | N/A |
    | Adjusted EBITDA Margin                                    | N/A | N/A | N/A | N/A |
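    Each segment's Adjusted EBITDA reconciliation above follows the same pattern: net loss plus the listed add-backs, with the margin taken over segment Net Revenue. A short sanity-check of that arithmetic (our own illustration, not Waldencast's calculation; the function and variable names are hypothetical), using the Obagi Medical Q4 2024 column:

    ```python
    # Figures in $ thousands, taken from the Obagi Medical Q4 2024 column above.
    def adjusted_ebitda(net_loss, addbacks):
        """Adjusted EBITDA = net loss plus the listed add-backs
        (an add-back may be negative, e.g. a release of a related party liability)."""
        return net_loss + sum(addbacks.values())

    obagi_q4_2024 = {
        "depreciation_and_amortization": 10_397,
        "interest_expense_net": 3_068,
        "income_tax_expense": 3_933,
        "stock_based_compensation": 465,
        "legal_and_advisory_non_recurring": 1_061,
        "amortization_release_related_party_liability": -4_169,
        "loss_on_impairment_of_goodwill": 5_031,
        "other_costs": 2_166,
    }
    ebitda = adjusted_ebitda(-12_114, obagi_q4_2024)
    margin = 100 * ebitda / 42_211  # Obagi Q4 2024 Net Revenue
    print(ebitda, round(margin, 1))  # 9838 23.3
    ```

    The result ties out to the table: $9,838 thousand of Adjusted EBITDA, a 23.3% Adjusted EBITDA Margin.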

    The MIL Network

  • MIL-OSI: NVIDIA, Alphabet and Google Collaborate on the Future of Agentic and Physical AI

    Source: GlobeNewswire (MIL-OSI)

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — Building on their longstanding partnership, NVIDIA, Alphabet and Google today announced new initiatives to advance AI, democratize access to AI tools, speed the development of physical AI and transform industries including healthcare, manufacturing and energy.

    Engineers and researchers throughout Alphabet are working closely with technical teams at NVIDIA to use AI and simulation to develop robots with grasping skills, reimagine drug discovery, optimize energy grids and more. Employing the NVIDIA Omniverse™, NVIDIA Cosmos™ and NVIDIA Isaac™ platforms, teams from Google DeepMind, Isomorphic Labs, Intrinsic and X’s moonshot Tapestry will discuss milestones from their respective collaborations at the NVIDIA GTC global AI conference.

    To power research and AI production efforts for its customers, Alphabet’s Google Cloud will be among the first to adopt the NVIDIA GB300 NVL72 rack-scale solution and NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPU, also announced today at GTC.

    NVIDIA will be the first to adopt SynthID, a Google DeepMind AI watermarking technology for protecting intellectual property by identifying AI-generated content.

    “I’m proud of our ongoing and deep partnership with NVIDIA, which spans the early days of Android and our cutting-edge AI collaborations across Alphabet,” said Sundar Pichai, CEO of Google and Alphabet. “I’m really excited about the next phase of our partnership as we work together on agentic AI, robotics and bringing the benefits of AI to more people around the world.”

    “Alphabet and NVIDIA have a longstanding partnership that extends from building AI infrastructure and software to advancing the use of AI in the largest industries,” said Jensen Huang, founder and CEO of NVIDIA. “It’s a great joy to see Google and NVIDIA researchers and engineers collaborate to solve incredible challenges, from drug discovery to robotics.”

    Developing Responsible AI and Open Models
    Google DeepMind and NVIDIA are working to build trust in generative AI through content transparency.

    NVIDIA will be the first external user of Google DeepMind’s SynthID, which embeds digital watermarks directly into AI-generated images, audio, text and video. SynthID helps preserve the integrity of outputs from NVIDIA Cosmos world foundation models, available on build.nvidia.com, helping to safeguard against misinformation and misattribution — all without compromising video quality.

    Google DeepMind and NVIDIA also partnered to optimize Gemma, Google’s family of lightweight, open models, to run on NVIDIA GPUs. The recent launch of Gemma 3 marks a significant leap forward for open innovation.

    NVIDIA has played a key role in making Gemma even more accessible for developers. Supercharged by the NVIDIA AI platform, Gemma is available as a highly optimized NVIDIA NIM™ microservice, harnessing the power of the open-source NVIDIA TensorRT-LLM library for exceptional inference performance.

    In addition, this deep engineering collaboration will extend to optimizing Gemini-based workloads on NVIDIA accelerated computing via Vertex AI.

    The Age of Intelligent Robots
    Intrinsic is an Alphabet company focused on making intelligently adaptive AI for robotics usable and valuable for manufacturers across industries. Today, the majority of the world’s installed industrial robots are manually programmed, with every movement hard-coded in a complex, expensive process.

    Partnering with NVIDIA, the teams have built deeper and more intuitive developer workflows for Intrinsic Flowstate to support NVIDIA Isaac Manipulator foundation models for a universal robot grasping capability. Using foundation models for robotics will significantly reduce application development time and improve flexibility, with AI that can adapt effortlessly. At GTC, Intrinsic will also share an early OpenUSD framework streaming connection between Intrinsic Flowstate and NVIDIA Omniverse — enabling real-time visualization of robot workcells across platforms.

    Concurrently, NVIDIA and Google DeepMind are announcing a collaboration with Disney Research to develop Newton, an open-source physics engine accelerated by the NVIDIA Warp framework that is compatible with MuJoCo. Powered by Newton, MuJoCo will accelerate robotics machine learning workloads by more than 70x compared with MuJoCo’s existing GPU-accelerated simulator, MJX.

    Applying Innovation to Real-World Challenges
    Isomorphic Labs, founded by Google DeepMind CEO Demis Hassabis, is reimagining drug discovery with AI. It has built a state-of-the-art drug design engine housed on Google Cloud with NVIDIA GPUs to enable the scale and performance needed to continue developing groundbreaking AI models that can help advance human health.

    Tapestry, X’s moonshot for the electric grid, is building AI-powered products for a greener and more reliable future grid. Tapestry and NVIDIA are exploring methods for increasing the speed and accuracy of electric grid simulations.

    This joint effort will focus on the challenges of integrating new energy sources and expanding grid capacity to meet the growing demands of data centers and AI, while helping ensure grid stability. The companies will evaluate potential solutions, including using AI to optimize the interconnection process, with the goal of enhancing the planning and modernization of energy infrastructure for a more sustainable future.

    The Next Generation of AI-Optimized Infrastructure
    Building on its commitment to provide customers with the most advanced AI infrastructure, Google Cloud will be one of the first companies to offer the latest instances of NVIDIA Blackwell GPUs — NVIDIA GB300 NVL72 and NVIDIA RTX PRO 6000 Blackwell Server Edition.

    Built on the groundbreaking Blackwell architecture introduced a year ago, Blackwell Ultra includes the NVIDIA GB300 NVL72 rack-scale solution and the NVIDIA HGX™ B300 NVL16 system. The GB300 NVL72 delivers 1.5x more AI performance than the NVIDIA GB200 NVL72, as well as increases Blackwell’s revenue opportunity by 50x for AI factories, compared with those built with NVIDIA Hopper™. NVIDIA RTX PRO 6000 Blackwell is the ultimate universal GPU for both AI and visual computing workloads across healthcare, manufacturing, retail, live broadcast and other industries.

    With last month’s preview launches of its A4 and A4X virtual machines, Google Cloud became the first cloud provider to offer both NVIDIA B200- and GB200-based instances. Now, A4 is generally available — with A4X coming soon — so customers can take advantage of Blackwell’s powerful performance with the added benefits of Google Cloud’s AI Hypercomputer.

    Google Cloud and NVIDIA have worked together to optimize open-source frameworks such as JAX, a popular Python library for machine learning, and MaxText so they run efficiently on NVIDIA GPUs at scale. MaxText, an advanced framework for scaling large models across massive GPU clusters, uses optimizations codeveloped with NVIDIA to enable efficient training on tens of thousands of GPUs.

    GTC attendees interested in learning more about Alphabet and NVIDIA’s work can visit the Google Cloud booth 914.

    About Alphabet Inc.
    Alphabet is a collection of companies, the largest of which is Google. Larry Page and Sergey Brin founded Google in September 1998 and the company is headquartered in Mountain View, Calif. Billions of people use its wide range of popular products and platforms each day, like Search, Ads, Chrome, Cloud, YouTube and Android.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Cliff Edwards
    NVIDIA Corporation
    +1-415-699-2755
    cliffe@nvidia.com

    press@google.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; and the collaboration between NVIDIA and Alphabet and the benefits and impact thereof are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA Cosmos, NVIDIA HGX, NVIDIA Hopper, NVIDIA Isaac, NVIDIA Omniverse, and NVIDIA RTX PRO are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/611ce8d4-bb5c-47ff-85d5-591363b25467


  • MIL-OSI: S4A IT Solutions Achieves Neptune Software Certification

    Source: GlobeNewswire (MIL-OSI)

    CALGARY, Alberta, March 18, 2025 (GLOBE NEWSWIRE) — S4A IT Solutions (S4A) proudly announces its recent certification from Neptune Software, a global leader in low-code/no-code application development platforms. This significant achievement underscores S4A’s dedication to delivering top-tier solutions to asset-intensive organizations running SAP systems.

    This milestone follows another tremendous recognition for S4A, as the company was recently named Neptune Software’s 2024 North American Partner of the Year. This prestigious award further validates S4A’s commitment to excellence and innovation in developing transformative solutions for Enterprise Asset Management (EAM) organizations.

    Neptune Software’s certification is a hallmark of quality and expertise, signifying that S4A’s developers and product suite meet the highest industry standards. This certification assures clients that S4A’s developers possess advanced proficiency in leveraging the Neptune Software platform, enabling the development of robust, user-friendly applications tailored to the needs of asset-intensive organizations.

    “Achieving Neptune Software certification is a milestone that validates the skill and commitment of our product development team,” shares German Aravena, Vice President of Product Development at S4A. “It reinforces our promise to deliver innovative applications that empower asset-intensive organizations, coupled with material visibility to optimize their SAP investments.”

    S4A’s Envoy Suite, built on the Neptune Software platform, exemplifies this commitment to excellence:

    • Maestro: A logistics management solution streamlining inventory and warehouse processes.
    • Balance: A planning and scheduling application enabling seamless work order management and resource allocation.
    • Tempus: A time entry application simplifying workforce time tracking and enhancing operational accuracy.

    “S4A’s certification of their OEM solution with Neptune marks an important milestone in our partnership,” says David Brockington, Neptune Software’s Director, Americas Partner Ecosystem. “This achievement not only validates the seamless integration of our technology but also paves the way for even greater innovation and efficiency for our joint customers. We look forward to continue working closely with S4A to deliver powerful, enterprise-ready solutions that simplify and accelerate SAP app development.”

    The Neptune Software certification represents S4A’s unwavering focus on technical proficiency, product innovation, and customer satisfaction. Through this partnership and continued development of the Envoy Suite, S4A aims to empower asset-intensive organizations to streamline their operations and maximize their SAP investments.

    Booking Demos Now!
    Organizations ready to elevate their EAM operations can now book a demo of any Envoy product. Visit s4ait.com to learn more and to schedule your personalized demonstration.

    About S4A IT Solutions:
    S4A IT Solutions is a Calgary-based boutique IT solutions and delivery consulting company specializing in providing tailored digital solutions and unparalleled support to Enterprise Asset Management clients across various industries. With a commitment to innovation, excellence, and superior client satisfaction, S4A IT Solutions helps businesses leverage technology to achieve their goals and stay ahead in today’s competitive market.

    For media inquiries, please contact:
    Erika Holter, Marketing Lead
    S4A IT Solutions
    erika.holter@s4ait.com
    s4ait.com

    Photos accompanying this announcement are available at

    https://www.globenewswire.com/NewsRoom/AttachmentNg/06e50a47-14b0-48c1-a928-4cf7ab6d66b3

    https://www.globenewswire.com/NewsRoom/AttachmentNg/8510b2ca-f506-42f9-9931-2d913373e00c

    https://www.globenewswire.com/NewsRoom/AttachmentNg/a3e4f518-7998-424d-90e0-af0823bb85d6

    https://www.globenewswire.com/NewsRoom/AttachmentNg/1910b985-e593-4598-8540-58ccbf9a74a5


  • MIL-OSI: NVIDIA and GE HealthCare Collaborate to Advance the Development of Autonomous Diagnostic Imaging With Physical AI

    Source: GlobeNewswire (MIL-OSI)

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced a collaboration with GE HealthCare to advance innovation in autonomous imaging, focused on developing autonomous X-ray technologies and ultrasound applications.

    Building autonomy into systems like X-ray and ultrasound requires medical imaging systems to understand and operate in the physical world. This enables the automation of complex workflows such as patient placement, image scanning and quality checking.

    To accomplish this, GE HealthCare, a pioneering partner, is using the new NVIDIA Isaac™ for Healthcare medical device simulation platform, which includes pretrained models and physics-based simulations of sensors, anatomy and environments. The platform accelerates research and development workflows, enabling GE HealthCare to train, test and validate autonomous imaging system capabilities in a virtual environment before deployment in the physical world.

    “The healthcare industry is one of the most important applications of AI, as the demand for healthcare services far exceeds the supply,” said Kimberly Powell, vice president of healthcare at NVIDIA. “We are working with an industry leader, GE HealthCare, to deliver Isaac for Healthcare, three computers to give lifesaving medical devices the ability to act autonomously and extend access to healthcare globally.”

    Expanding Access to Imaging With Physical AI
    Ultrasound and X-ray are the most common and widely used diagnostic imaging systems, yet nearly two-thirds of the global population lacks access to them. Enhancing imaging systems with robotic capabilities will help expand access to care.

    NVIDIA and GE HealthCare have been working together for nearly two decades, building innovative image-reconstruction techniques across CT and MRI, image-guided therapy and mammography.

    “GE HealthCare is committed to developing innovative technologies that redefine and enhance patient care,” said Roland Rott, president and CEO of Imaging at GE HealthCare. “We look forward to taking advantage of physical AI for autonomous imaging systems with NVIDIA technology to improve patient access and address the challenges of growing workloads and staffing shortages in healthcare.”

    Isaac for Healthcare Closes Gap Between Simulation and Reality
    NVIDIA will also support other customers with Isaac for Healthcare for use cases including simulation environments. Simulation environments enable robotic systems to safely learn skills in a physically accurate virtual environment for real-world situations, such as surgery, that would otherwise be impossible to replicate.

    Isaac for Healthcare is a physical AI platform built on NVIDIA’s three computers for robotics: NVIDIA DGX™, NVIDIA Omniverse™ and NVIDIA Holoscan. It includes AI models fine-tuned for healthcare robotics that can understand, act and see using enhanced vision and language processing. It also has a simulation framework for developers to accurately simulate medical environments and provides seamless deployment on NVIDIA Holoscan, an edge AI computing platform, to power robotic decision-making in the real world, in real time.

    Simulation options for medical sensors are often limited. With Isaac for Healthcare, developers can now access physics-based digital twins of medical environments, allowing them to import custom sensors, instruments and even anatomies to teach robots how to respond to various scenarios. These virtual environments help close the gap between simulation and real-world implementation, and enable rapid digital prototyping.

    Isaac for Healthcare allows for multi-scale simulation ranging from microscopic structures and surgery suites to full hospital facilities. Easy policy training in simulation allows robotic systems to learn how to respond in various medical scenarios in the operating room, and how to best support physician decision-making and patient care.

    Healthcare Robotics Ecosystem Rapidly Expands
    Isaac for Healthcare can help speed the development of robotic healthcare solutions by simulating complex medical scenarios, training AI models and optimizing robotic applications like surgery, endoscopy and cardiovascular interventions. Early adopters include Moon Surgical, Neptune Medical and Xcath.

    Isaac for Healthcare is enabling ecosystem partners to seamlessly integrate their simulation tools, sensors, robot systems and medical probes into a domain-specific simulation environment. Among early ecosystem partners are Ansys, Franka, ImFusion, Kinova and Kuka.

    Isaac for Healthcare is now available in early access.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Janette Ciborowski
    Enterprise Communications
    NVIDIA Corporation
    +1-734-330-8817
    jciborowski@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; the collaboration between NVIDIA and GE HealthCare and the benefits and impact thereof; and GE HealthCare driving innovation in the diagnostic imaging industry — and these simulation tools being now in reach for the entire healthcare ecosystem are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA DGX, NVIDIA Isaac and NVIDIA Omniverse are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/f47cd0c2-e934-44d5-aac5-ce681eced9d4


  • MIL-OSI: NVIDIA to Build Accelerated Quantum Computing Research Center

    Source: GlobeNewswire (MIL-OSI)

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC— NVIDIA today announced it is building a Boston-based research center to provide cutting-edge technologies to advance quantum computing.

    The NVIDIA Accelerated Quantum Research Center, or NVAQC, will integrate leading quantum hardware with AI supercomputers, enabling what is known as accelerated quantum supercomputing. The NVAQC will help solve quantum computing’s most challenging problems, ranging from qubit noise to transforming experimental quantum processors into practical devices.

    Leading quantum computing innovators, including Quantinuum, Quantum Machines and QuEra Computing, will tap into the NVAQC to drive advancements through collaborations with researchers from leading universities, such as the Harvard Quantum Initiative in Science and Engineering (HQI) and the Engineering Quantum Systems (EQuS) group at the Massachusetts Institute of Technology (MIT).

    “Quantum computing will augment AI supercomputers to tackle some of the world’s most important problems, from drug discovery to materials development,” said Jensen Huang, founder and CEO of NVIDIA. “Working with the wider quantum research community to advance CUDA-quantum hybrid computing, the NVIDIA Accelerated Quantum Research Center is where breakthroughs will be made to create large-scale, useful, accelerated quantum supercomputers.”

    Propelling Quantum Innovation
    Through the NVAQC, commercial and academic partners will work with NVIDIA to use state-of-the-art NVIDIA GB200 NVL72 rack-scale systems, the most powerful hardware ever deployed for quantum computing applications. This enables complex simulations of quantum systems and the deployment of the low-latency quantum hardware control algorithms essential for quantum error correction. NVIDIA GB200 NVL72 systems will also accelerate the adoption of AI algorithms in quantum computing research.

    To address the challenges of integrating GPU and QPU hardware, the NVAQC will employ the NVIDIA CUDA-Q™ quantum development platform, enabling researchers to develop new hybrid quantum algorithms and applications.

    The HQI — a community of researchers dedicated to advancing the science and engineering of quantum systems and their applications — will collaborate with the NVAQC to advance their research on next-generation quantum computing technologies.

    “The NVAQC is a very special addition to the unique Boston area quantum ecosystem, including world-leading university groups and startup companies,” said Mikhail Lukin, Joshua and Beth Friedman University Professor at Harvard and a co-director of HQI. “The accelerated quantum and classical computing technologies NVIDIA is bringing together have the potential to advance research in areas ranging from quantum error correction to applications of quantum computing systems, accelerating quantum computing research and pulling useful quantum computing closer to reality.”

    Researchers from the EQuS group, a member of the MIT Center for Quantum Engineering — which serves as a hub for research, education and engagement in support of quantum engineering — will use NVAQC to develop techniques like quantum error correction.

    “The NVIDIA Accelerated Quantum Research Center will provide EQuS group researchers with unprecedented access to the technologies and expertise needed to solve the challenges of useful quantum computing,” said William Oliver, professor of electrical engineering and computer science, and of physics, leader of the EQuS group and director of the MIT Center for Quantum Engineering. “We anticipate the future will also include other members of the Center for Quantum Engineering at MIT. Integrating the NVIDIA accelerated computing platform with qubits will help tackle core challenges like quantum error correction, hybrid application development and quantum device characterization.”

    The NVAQC is expected to begin operations later this year.

    Learn more about NVIDIA’s quantum computing initiatives and hear from industry leaders by joining Quantum Day at NVIDIA GTC, which runs through March 21.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Alex Shapiro
    Enterprise Networking
    1-415-608-5044
    ashapiro@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting NVIDIA’s products and technologies and the impact and benefits thereof; quantum computing someday augmenting AI supercomputers to tackle some of the world’s most important problems, from drug discovery to materials development; and working with the wider quantum research community to advance CUDA-quantum hybrid computing, the NVIDIA Accelerated Quantum Computing Research Center being where breakthroughs will be made to create large-scale, useful, accelerated quantum supercomputers are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. 
These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, CUDA and CUDA-Q are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/9baec2e8-036a-4c70-b868-1af4797fc282

    The MIL Network

  • MIL-OSI: NVIDIA Announces Major Release of Cosmos World Foundation Models and Physical AI Data Tools

    Source: GlobeNewswire (MIL-OSI)

    • New Models Enable Prediction, Controllable World Generation and Reasoning for Physical AI
    • Two New Blueprints Deliver Massive Physical AI Synthetic Data Generation for Robot and Autonomous Vehicle Post-Training
    • 1X, Agility Robotics, Figure AI, Skild AI Among Early Adopters

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced a major release of new NVIDIA Cosmos™ world foundation models (WFMs), introducing an open and fully customizable reasoning model for physical AI development and giving developers unprecedented control over world generation.

    NVIDIA is also launching two new blueprints — powered by the NVIDIA Omniverse™ and Cosmos platforms — that provide developers with massive, controllable synthetic data generation engines for post-training robots and autonomous vehicles.

    Industry leaders including 1X, Agility Robotics, Figure AI, Foretellix, Skild AI and Uber are among the first to adopt Cosmos to generate richer training data for physical AI faster and at scale.

    “Just as large language models revolutionized generative and agentic AI, Cosmos world foundation models are a breakthrough for physical AI,” said Jensen Huang, founder and CEO of NVIDIA. “Cosmos introduces an open and fully customizable reasoning model for physical AI and unlocks opportunities for step-function advances in robotics and the physical industries.”

    Cosmos Transfer for Synthetic Data Generation
    Cosmos Transfer WFMs ingest structured video inputs such as segmentation maps, depth maps, lidar scans, pose estimation maps and trajectory maps to generate controllable photoreal video outputs.

    Cosmos Transfer streamlines perception AI training, transforming 3D simulations or ground truth created in Omniverse into photorealistic videos for large-scale, controllable synthetic data generation.

    Agility Robotics will be an early adopter of Cosmos Transfer and Omniverse for large-scale synthetic data generation to train its robot models.

    “Cosmos offers us an opportunity to scale our photorealistic training data beyond what we can feasibly collect in the real world,” said Pras Velagapudi, chief technology officer of Agility Robotics. “We’re excited to see what new performance we can unlock with the platform, while making the most use of the physics-based simulation data we already have.”

    The NVIDIA Omniverse Blueprint for autonomous vehicle simulation uses Cosmos Transfer to amplify variations of physically based sensor data. With the blueprint, Foretellix can enhance behavioral scenarios by varying conditions like weather and lighting for diverse driving datasets. Parallel Domain is also using the blueprint to apply similar variation to its sensor simulation.

    The NVIDIA GR00T Blueprint for synthetic manipulation motion generation combines Omniverse and Cosmos Transfer to generate diverse datasets at scale, benefiting from OpenUSD-powered simulations and reducing data collection and augmentation time from days to hours.

    Cosmos Predict for Intelligent World Generation
    Announced at the CES trade show in January, Cosmos Predict WFMs generate virtual world states from multimodal inputs like text, images and video. New Cosmos Predict models will enable multi-frame generation, predicting intermediate actions or motion trajectories when given start and end input images. Purpose-built for post-training, these models can be customized using NVIDIA’s openly available physical AI dataset.

    With the inference compute power of NVIDIA Grace Blackwell NVL72 systems and their large NVIDIA NVLink™ domain, developers can achieve real-time world generation.

    1X is using Cosmos Predict and Cosmos Transfer to train its new humanoid robot NEO Gamma. Robot brain developer Skild AI is tapping into Cosmos Transfer to augment synthetic datasets for its robots. Plus, Nexar and Oxa are using Cosmos Predict to advance their autonomous driving systems.

    Multimodal Reasoning for Physical AI
    Cosmos Reason is an open, fully customizable WFM with spatiotemporal awareness that uses chain-of-thought reasoning to understand video data and predict the outcomes of interactions — such as a person stepping into a crosswalk or a box falling from a shelf — in natural language.

    Developers can use Cosmos Reason to improve physical AI data annotation and curation, enhance existing world foundation models or create new vision language action models. They can also post-train it to build high-level planners to tell the physical AI what it needs to do to complete a task.

    Accelerating Data Curation and Post-Training for Physical AI
    Based on their downstream task, developers can post-train Cosmos WFMs using native PyTorch scripts or the NVIDIA NeMo framework on NVIDIA DGX™ Cloud.

    Cosmos developers can also use NVIDIA NeMo Curator on DGX Cloud for accelerated data processing and curation. Linker Vision and Milestone Systems are using it to curate large amounts of video data for training large vision language models for visual agents built on the NVIDIA AI Blueprint for video search and summarization. Virtual Incision is exploring deploying it in future surgical robots, while Uber and Waabi are advancing autonomous vehicle development.

    Driving Responsible AI and Content Transparency
    In line with NVIDIA’s trustworthy AI principles, NVIDIA enforces open guardrails across all Cosmos WFMs. In addition, NVIDIA is collaborating with Google DeepMind to integrate SynthID to watermark and help identify AI-generated outputs from the Cosmos WFM NVIDIA NIM™ microservice featured on build.nvidia.com.

    Availability
    Cosmos WFMs are available for preview in the NVIDIA API catalog and now listed in the Vertex AI Model Garden on Google Cloud. Cosmos Predict and Cosmos Transfer are openly available on Hugging Face and GitHub. Cosmos Reason is available in early access.

    Learn more by watching the NVIDIA GTC keynote and by registering for Cosmos sessions and training from NVIDIA and industry leaders at the show, including “An Introduction to Cosmos World Foundation Models” with Ming-Yu Liu, vice president of generative AI research at NVIDIA.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Paris Fox
    Corporate Communications
    NVIDIA Corporation
    +1-408-242-0035
    pfox@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting NVIDIA’s products and technologies and the benefits and impact thereof; and Cosmos opening opportunities for step-function advances in robotics and the physical industries are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA Cosmos, NVIDIA DGX, NVIDIA NeMo, NVIDIA NIM, NVIDIA Omniverse and NVLink are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/6c781321-9544-4bbf-bb47-8bab73fe2f63


  • MIL-OSI: NVIDIA Omniverse Physical AI Operating System Expands to More Industries and Partners

    Source: GlobeNewswire (MIL-OSI)

    • Accenture, Ansys, Cadence, Databricks, Dematic, Hexagon, Omron, SAP, Schneider Electric With ETAP, Siemens Connect Omniverse to Leading Software Tools
    • Four New Blueprints Enable Robot-Ready Factories and Large-Scale Synthetic Data Generation
    • Foxconn, General Motors, Hyundai Motor Group, KION Group, Mercedes-Benz, Pegatron and Schaeffler Adopt Omniverse for Industrial AI Transformation

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today unveiled that leading industrial software and service providers Ansys, Databricks, Dematic, Omron, SAP, Schneider Electric with ETAP, Siemens and more are integrating the NVIDIA Omniverse™ platform into their solutions to accelerate industrial digitalization with physical AI.

    New NVIDIA Omniverse Blueprints connected to NVIDIA Cosmos™ world foundation models are now available to enable robot-ready facilities and large-scale synthetic data generation for physical AI development.

    “Omniverse is an operating system that connects the world’s physical data to the realm of physical AI,” said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA. “With Omniverse, global industrial software, data and professional services leaders are uniting industrial ecosystems and building new applications that will advance the next generation of AI for industries at unprecedented speed.”

    New Blueprints Enable Robot-Ready Facilities and Large-Scale Synthetic Data Generation
    Mega, an Omniverse Blueprint for testing multi-robot fleets at scale in industrial digital twins, is now available in preview on build.nvidia.com. Also available is the NVIDIA AI Blueprint for video search and summarization, powered by the NVIDIA Metropolis platform, for building AI agents that monitor activity across entire facilities.

    Manufacturing leaders are using the blueprints to optimize their industrial operations with physical AI.

    In automotive manufacturing, Schaeffler and Accenture are starting to adopt Mega to test and simulate fleets of Agility Robotics Digit for material-handling automation. Hyundai Motor Group is using the blueprint to simulate Boston Dynamics Atlas robots on its assembly lines, and Mercedes-Benz is using it to simulate Apptronik’s Apollo humanoid robots to optimize vehicle assembly operations.

    In electronics manufacturing, Pegatron is using Mega to develop physical AI-based NVIDIA Metropolis video analytics agents to improve factory operations and worker safety. Foxconn is using the blueprint to simulate industrial manipulators, humanoids and mobile robots in its manufacturing facilities for the NVIDIA Blackwell platform.

    “Foxconn is constantly exploring ways to transform our operations as we continue our journey toward building the factories of the future,” said Brand Cheng, CEO of Fii, a core subsidiary of Foxconn. “Using NVIDIA Omniverse and Mega, we’re testing and training humanoids to operate in our leading factories as we advance to the next wave of physical AI.”

    For warehouses and supply chain solutions, KION Group, Dematic and Accenture announced they are integrating Mega to advance next-generation AI-powered automation. idealworks is integrating Mega into its fleet management software to simulate, test and optimize robotic fleets. SAP customers and partners can use Omniverse to develop their own virtual environments for warehouse management scenarios.

    A new Omniverse Blueprint for AI factory digital twins lets data center engineers design and simulate AI factory layouts, cooling and electrical systems to maximize utilization and efficiency. Cadence Reality Digital Twin Platform and Schneider Electric with ETAP are the first to integrate their simulation software into the blueprint, while Vertiv and Schneider Electric are providing Omniverse SimReady 3D models of their power and cooling units to accelerate the development of AI factory digital twins.

    The NVIDIA Isaac GR00T Blueprint for synthetic manipulation motion generation is also now available for robotics developers, enabling large-scale synthetic data generation from Omniverse and Cosmos. The blueprint helps humanoid developers reduce data collection time from hours to minutes, fast-tracking robot development. 

    Omniverse Physical AI Operating System Expands Across Industries
    Digitalization is challenging for industries grounded in the physical world. Massive amounts of digital and physical world data from legacy systems create silos. Omniverse is an operating system built on the OpenUSD framework that enables developers to unify physical-world data and applications.

    Ansys, Cadence, Hexagon, Omron, Rockwell Automation and Siemens are integrating Omniverse data interoperability and visualization technologies into their leading industrial software, simulation and automation solutions to accelerate product development and optimize manufacturing processes.

    For physical AI, Intrinsic, an Alphabet company, is enabling Omniverse workflows and NVIDIA robotics foundation models to transition from digital twins to hardware deployments using Flowstate. Databricks is integrating NVIDIA Omniverse with the Databricks Data Intelligence Platform, which will enable large-scale synthetic data generation for physical AI.

    General Motors, America’s largest auto manufacturer, announced its adoption of Omniverse to enhance its factories and train platforms for operations such as material handling, transportation and precision welding. At the other end of the manufacturing life cycle, Unilever announced its adoption of Omniverse and physically accurate digital twins to streamline and optimize marketing content creation for its products.

    Omniverse in Every Cloud
    To simplify development, deployment and scale-out of OpenUSD-based applications, NVIDIA Omniverse is now available as virtual desktop images on EC2 G6e instances with NVIDIA L40S GPUs in AWS Marketplace. The Microsoft Azure Marketplace now features preconfigured Omniverse instances and Omniverse Kit App Streaming on NVIDIA A10 GPUs, allowing developers to easily develop and stream their custom Omniverse applications.

    These cloud-based NVIDIA Omniverse developer tools and services are expected to be available later this year on Oracle Cloud Infrastructure compute bare-metal instances with NVIDIA L40S GPUs, as well as the newly announced NVIDIA RTX PRO™ 6000 Blackwell Server Edition on Google Cloud.

    OpenUSD Unifies Robotics Workflows
    At GTC, NVIDIA introduced the OpenUSD Asset Structure Pipeline for Robotics with Disney Research and Intrinsic. This new structure and data pipeline uses today’s best practices within OpenUSD to work toward unifying robotic workflows, providing a common language for all data sources.

    Learn more by watching the NVIDIA GTC keynote and registering for OpenUSD, physical AI and industrial AI sessions, as well as trainings featuring NVIDIA experts and industry leaders at the show, which runs through March 21.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Quentin Nolibois
    Corporate Communications
    NVIDIA Corporation
    +1-415-741-8356
    qnolibois@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting NVIDIA’s products and technologies, the benefits and impact thereof, and the availability of their offerings; with Omniverse, global industrial software, data and professional services leaders uniting industrial ecosystems and building new applications that will advance the next generation of AI for industries at unprecedented speed; and digitalization challenging for industries grounded in the physical world  are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. 
These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA Cosmos, NVIDIA Omniverse and NVIDIA RTX PRO are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/4d263d7d-238c-46b1-a11c-424703a906ab


  • MIL-OSI: NVIDIA Blackwell Ultra DGX SuperPOD Delivers Out-of-the-Box AI Supercomputer for Enterprises to Build AI Factories

    Source: GlobeNewswire (MIL-OSI)

    • NVIDIA Blackwell Ultra-Powered DGX Systems Supercharge AI Reasoning for Real-Time AI Agent Responses
    • Equinix First to Offer NVIDIA Instant AI Factory Service, With Preconfigured Space in Blackwell-Ready Facilities for DGX GB300 and DGX B300 Systems to Meet Global Demand for AI Infrastructure

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced the world’s most advanced enterprise AI infrastructure — NVIDIA DGX SuperPOD™ built with NVIDIA Blackwell Ultra GPUs — which provides enterprises across industries with AI factory supercomputing for state-of-the-art agentic AI reasoning.

    Enterprises can use new NVIDIA DGX™ GB300 and NVIDIA DGX B300 systems, integrated with NVIDIA networking, to deliver out-of-the-box DGX SuperPOD AI supercomputers that offer FP4 precision and faster AI reasoning to supercharge token generation for AI applications.

    AI factories provide purpose-built infrastructure for agentic, generative and physical AI workloads, which can require significant computing resources for AI pretraining, post-training and test-time scaling for applications running in production.

    “AI is advancing at light speed, and companies are racing to build AI factories that can scale to meet the processing demands of reasoning AI and inference time scaling,” said Jensen Huang, founder and CEO of NVIDIA. “The NVIDIA Blackwell Ultra DGX SuperPOD provides out-of-the-box AI supercomputing for the age of agentic and physical AI.”

    DGX GB300 systems feature NVIDIA Grace Blackwell Ultra Superchips — which include 36 NVIDIA Grace™ CPUs and 72 NVIDIA Blackwell Ultra GPUs — and a rack-scale, liquid-cooled architecture designed for real-time agent responses on advanced reasoning models.

    Air-cooled NVIDIA DGX B300 systems harness the NVIDIA B300 NVL16 architecture to help data centers everywhere meet the computational demands of generative and agentic AI applications.

    To meet growing demand for advanced accelerated infrastructure, NVIDIA also unveiled NVIDIA Instant AI Factory, a managed service featuring the Blackwell Ultra-powered NVIDIA DGX SuperPOD. Equinix will be first to offer the new DGX GB300 and DGX B300 systems in its preconfigured liquid- or air-cooled AI-ready data centers located in 45 markets around the world.

    NVIDIA DGX SuperPOD With DGX GB300 Powers Age of AI Reasoning
    DGX SuperPOD with DGX GB300 systems can scale up to tens of thousands of NVIDIA Grace Blackwell Ultra Superchips — connected via NVIDIA NVLink™, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X™ Ethernet networking — to supercharge training and inference for the most compute-intensive workloads.

    DGX GB300 systems deliver up to 70x more AI performance than AI factories built with NVIDIA Hopper™ systems, along with 38TB of fast memory, offering unmatched performance at scale for agentic AI and reasoning applications.

    The 72 Grace Blackwell Ultra GPUs in each DGX GB300 system are connected by fifth-generation NVLink technology to become one massive, shared memory space through the NVLink Switch system.

    Each DGX GB300 system features 72 NVIDIA ConnectX®-8 SuperNICs, delivering accelerated networking speeds of up to 800Gb/s — double the performance of the previous generation. Eighteen NVIDIA BlueField®-3 DPUs pair with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-X Ethernet to accelerate performance, efficiency and security in massive-scale AI data centers.
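    The networking figures above imply a very large aggregate fabric bandwidth per system. As a rough back-of-the-envelope sketch, assuming every SuperNIC could drive full line rate simultaneously (which real traffic patterns rarely sustain):

    ```python
    # Back-of-the-envelope aggregate NIC bandwidth for a DGX GB300 system,
    # using only the figures quoted in the press release. Idealized: assumes
    # all SuperNICs run at line rate at once.

    NUM_SUPERNICS = 72     # ConnectX-8 SuperNICs per DGX GB300 system
    LINE_RATE_GBPS = 800   # Gb/s per SuperNIC ("double the previous generation")
    PREV_RATE_GBPS = 400   # implied prior-generation line rate

    def aggregate_bandwidth_gbps(nics: int, rate_gbps: int) -> int:
        """Peak aggregate bandwidth in Gb/s if all NICs run at line rate."""
        return nics * rate_gbps

    total_gbps = aggregate_bandwidth_gbps(NUM_SUPERNICS, LINE_RATE_GBPS)
    print(f"Peak aggregate: {total_gbps} Gb/s = {total_gbps / 1000} Tb/s")  # 57600 Gb/s = 57.6 Tb/s
    print(f"Generational gain per NIC: {LINE_RATE_GBPS / PREV_RATE_GBPS:.0f}x")  # 2x
    ```

    This is an upper bound on injection bandwidth, not a measured throughput figure.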

    DGX B300 Systems Accelerate AI for Every Data Center
    The NVIDIA DGX B300 system is an AI infrastructure platform designed to bring energy-efficient generative AI and AI reasoning to every data center.

    Accelerated by NVIDIA Blackwell Ultra GPUs, DGX B300 systems deliver 11x faster AI performance for inference and a 4x speedup for training compared with the Hopper generation.

    Each system provides 2.3TB of HBM3e memory and includes advanced networking with eight NVIDIA ConnectX-8 SuperNICs and two BlueField-3 DPUs.
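    To put the quoted DGX B300 speedup factors in concrete terms, the sketch below applies them to hypothetical baseline job durations; the baseline hours are invented for illustration, and real workloads will see different end-to-end gains:

    ```python
    # Illustrative only: projecting wall-clock times from the quoted DGX B300
    # speedups (11x inference, 4x training vs. the Hopper generation).
    # The baseline durations below are made-up example numbers.

    INFERENCE_SPEEDUP = 11.0
    TRAINING_SPEEDUP = 4.0

    def projected_hours(baseline_hours: float, speedup: float) -> float:
        """Projected wall-clock time if the quoted speedup held end to end."""
        return baseline_hours / speedup

    inference_job = projected_hours(22.0, INFERENCE_SPEEDUP)   # hypothetical 22 h -> 2.0 h
    training_job = projected_hours(100.0, TRAINING_SPEEDUP)    # hypothetical 100 h -> 25.0 h
    print(inference_job, training_job)
    ```

    Quoted generational multipliers rarely hold uniformly across a pipeline, so treat such projections as rough bounds.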

    NVIDIA Software Accelerates AI Development and Deployment
    To enable enterprises to automate the management and operations of their infrastructure, NVIDIA also announced NVIDIA Mission Control™ — AI data center operation and orchestration software for Blackwell-based DGX systems.

    NVIDIA DGX systems support the NVIDIA AI Enterprise software platform for building and deploying enterprise-grade AI agents. This includes NVIDIA NIM™ microservices, such as the new NVIDIA Llama Nemotron open reasoning model family announced today, and NVIDIA AI Blueprints, frameworks, libraries and tools used to orchestrate and optimize performance of AI agents.

    NVIDIA Instant AI Factory to Meet Infrastructure Demand
    NVIDIA Instant AI Factory offers enterprises an Equinix managed service featuring the Blackwell Ultra-powered NVIDIA DGX SuperPOD with NVIDIA Mission Control software.

    With dedicated Equinix facilities around the globe, the service will provide businesses with fully provisioned, intelligence-generating AI factories optimized for state-of-the-art model training and real-time reasoning workloads — eliminating months of pre-deployment infrastructure planning.

    Availability
    NVIDIA DGX SuperPOD with DGX GB300 or DGX B300 systems are expected to be available from partners later this year.

    NVIDIA Instant AI Factory is planned to be available starting later this year.

    Learn more by watching the NVIDIA GTC keynote and registering to attend sessions from NVIDIA and industry leaders at the show, which runs through March 21.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Allie Courtney
    NVIDIA Corporation
    +1-408-706-8995
    acourtney@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting NVIDIA’s products and technologies and the benefits and impact thereof; and AI advancing at light speed, and companies racing to build AI factories that can scale to meet the processing demands of reasoning AI and inference time scaling are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, BlueField, ConnectX, DGX, NVIDIA DGX SuperPOD, NVIDIA Grace, NVIDIA Hopper, NVIDIA Mission Control, NVIDIA NIM, NVIDIA Spectrum-X and NVLink are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/4f5747e8-5b3d-4764-9d34-6d63cbfb18c2


  • MIL-OSI: NVIDIA Blackwell Accelerates Computer-Aided Engineering Software by Orders of Magnitude for Real-Time Digital Twins

    Source: GlobeNewswire (MIL-OSI)

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced that leading computer-aided engineering (CAE) software vendors, including Ansys, Altair, Cadence, Siemens and Synopsys, are accelerating their simulation tools by up to 50x with the NVIDIA Blackwell platform.

    With such accelerated software, along with NVIDIA CUDA-X™ libraries and blueprints to further optimize performance, industries such as automotive, aerospace, energy, manufacturing and life sciences can significantly reduce product development time, cut costs and increase design accuracy while maintaining energy efficiency.

    “CUDA-accelerated physical simulation on NVIDIA Blackwell has enhanced real-time digital twins and is reimagining the entire engineering process,” said Jensen Huang, founder and CEO of NVIDIA. “The day is coming when virtually all products will be created and brought to life as digital twins long before they are realized physically.”

    Ecosystem Support for NVIDIA Blackwell
    Software providers can help their customers develop digital twins with real-time interactivity and now accelerate them with NVIDIA Blackwell technologies.

    The growing ecosystem integrating Blackwell into its software includes Altair, Ansys, BeyondMath, Cadence, COMSOL, ENGYS, Flexcompute, Hexagon, Luminary Cloud, M-Star, NAVASTO, an Autodesk company, Neural Concept, nTop, Rescale, Siemens, Simscale, Synopsys and Volcano Platforms.

    Cadence is using NVIDIA Grace Blackwell-accelerated systems to help solve one of computational fluid dynamics’ biggest challenges — the simulation of an entire aircraft during takeoff and landing. Using the Cadence Fidelity CFD solver, Cadence successfully ran multibillion cell simulations on a single NVIDIA GB200 NVL72 server in under 24 hours, which would have previously required a CPU cluster with hundreds of thousands of cores and several days to complete.

    This breakthrough will help the aerospace industry move toward designing safer, more efficient aircraft while reducing the amount of expensive wind-tunnel testing required, speeding time to market.

    Anirudh Devgan, president and CEO of Cadence, said, “NVIDIA Blackwell’s acceleration of the Cadence.AI portfolio delivers increased productivity and quality of results for intelligent system design — reducing engineering tasks that took hours to minutes and unlocking simulations not possible before. Our collaboration with NVIDIA drives innovation across semiconductors, data centers, physical AI and sciences.”

    Sassine Ghazi, president and CEO of Synopsys, said, “At GTC, we’re unveiling the latest performance results observed across our leading portfolio when optimizing Synopsys solutions for NVIDIA Blackwell to accelerate computationally intensive chip design workflows. Synopsys technology is mission-critical to the productivity and capabilities of engineering teams, from silicon to systems. By harnessing the power of NVIDIA accelerated computing, we can help customers unlock new levels of performance and deliver their innovations even faster.”

    Ajei Gopal, president and CEO of Ansys, said, “The close collaboration between Ansys and NVIDIA is accelerating innovation at an unprecedented pace. By harnessing the computational performance of NVIDIA Blackwell GPUs, we at Ansys are empowering engineers at Volvo Cars to tackle the most complex computational fluid dynamics challenges with exceptional speed and accuracy — enabling more optimization studies and delivering more performant vehicles.”

    James Scapa, founder and CEO of Altair, said, “The NVIDIA Blackwell platform’s computing power, combined with Altair’s cutting-edge simulation tools, gives users transformative capabilities. This combination makes GPU-based simulations up to 1.6x faster compared with the previous generation, helping engineers rapidly solve design challenges and giving industries the power to create safer, more sustainable products through real-time digital twins and physics-informed AI.”

    Roland Busch, president and CEO of Siemens, said, “The combination of NVIDIA’s groundbreaking Blackwell architecture with Siemens’ physics-based digital twins will enable engineers to drastically reduce development times and costs through using photo-realistic, interactive digital twins. This collaboration will allow us to help customers like BMW innovate faster, optimize processes and achieve remarkable levels of efficiency in design and manufacturing.”

    Rescale CAE Hub With NVIDIA Blackwell
    Rescale’s newly launched CAE Hub enables customers to streamline their access to NVIDIA technologies and CUDA®-accelerated software developed by leading independent software vendors. Rescale CAE Hub provides flexible, high-performance computing and AI technologies in the cloud powered by NVIDIA GPUs and NVIDIA DGX™ Cloud.

    Boom Supersonic, the company building the world’s fastest airliner, will use the NVIDIA Omniverse Blueprint for real-time digital twins and Blackwell-accelerated CFD solvers on Rescale CAE Hub to design and optimize its new supersonic passenger jet.

    The company’s product development cycle, which is almost entirely simulation-driven, will use the Rescale platform accelerated by Blackwell GPUs to test different flight conditions and refine requirements in a continuous loop with simulation.

    The adoption of the Rescale CAE Hub powered by Blackwell GPUs expands Boom Supersonic’s collaboration with NVIDIA. Through the NVIDIA PhysicsNeMo™ framework and the Rescale AI Physics platform, Boom Supersonic can unlock 4x more design explorations for its supersonic airliner, speeding iteration to improve performance and time to market.

    NVIDIA Omniverse Blueprint Now Broadly Accessible for Enterprises
    The NVIDIA Omniverse Blueprint for real-time digital twins, now generally available, is also part of the Rescale CAE Hub. The blueprint brings together NVIDIA CUDA-X libraries, NVIDIA PhysicsNeMo AI and the NVIDIA Omniverse™ platform — and is also adding the first NVIDIA NIM™ microservice for external aerodynamics, the study of how air moves around objects.

    Learn more by watching the NVIDIA GTC keynote and registering for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Steve Gartner
    NVIDIA Corporation
    +1-513-479-4060
    sgartner@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting or offering NVIDIA’s products and technologies and the benefits and impact thereof; the day coming when products will be created and brought to life as a digital twin long before it is realized physically; and real-time digital twins revolutionizing the physical industries are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, CUDA, CUDA-X, DGX, NVIDIA NIM, PhysicsNeMo, and NVIDIA Omniverse are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/9d8b1936-99e9-4170-9a34-5f1f25a1e88e

    The MIL Network

  • MIL-OSI: NVIDIA and Storage Industry Leaders Unveil New Class of Enterprise Infrastructure for the Age of AI

    Source: GlobeNewswire (MIL-OSI)

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — NVIDIA today announced the NVIDIA AI Data Platform, a customizable reference design that leading providers are using to build a new class of AI infrastructure for demanding AI inference workloads: enterprise storage platforms with AI query agents fueled by NVIDIA accelerated computing, networking and software.

    Using the NVIDIA AI Data Platform, NVIDIA-Certified Storage providers can build infrastructure to speed AI reasoning workloads with specialized AI query agents. These agents help businesses generate insights from data in near real time, using NVIDIA AI Enterprise software — including NVIDIA NIM™ microservices for the new NVIDIA Llama Nemotron models with reasoning capabilities — as well as the new NVIDIA AI-Q Blueprint.

    Storage providers can optimize their infrastructure to power these agents with NVIDIA Blackwell GPUs, NVIDIA BlueField® DPUs, NVIDIA Spectrum-X networking and the NVIDIA Dynamo open-source inference library.

    Leading data platform and storage providers — including DDN, Dell Technologies, Hewlett Packard Enterprise, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data and WEKA — are collaborating with NVIDIA to create customized AI data platforms that can harness enterprise data to reason and respond to complex queries.

    “Data is the raw material powering industries in the age of AI,” said Jensen Huang, founder and CEO of NVIDIA. “With the world’s storage leaders, we’re building a new class of enterprise infrastructure that companies need to deploy and scale agentic AI across hybrid data centers.”

    NVIDIA AI Data Platform Adds Accelerated Computing and AI to Storage
    The NVIDIA AI Data Platform brings accelerated computing and AI to the millions of businesses using enterprise storage for the data that drives their company.

    NVIDIA Blackwell GPUs, BlueField DPUs and Spectrum-X networking provide an accelerated engine to speed AI query agent access to data stored on enterprise systems. BlueField DPUs deliver up to 1.6x higher performance than CPU-based storage while reducing power consumption by up to 50%, providing more than 3x higher performance per watt. Spectrum-X accelerates AI storage traffic up to 48% compared with traditional Ethernet by applying adaptive routing and congestion control.

    AI Data Platform storage infrastructure uses the NVIDIA AI-Q Blueprint for developing agentic systems that can reason and connect to enterprise data. AI-Q taps into NVIDIA NeMo Retriever™ microservices to accelerate data extraction and retrieval by up to 15x on NVIDIA GPUs.

    AI query agents built with the AI-Q Blueprint connect to data during inference to provide more accurate, context-aware responses. They can access large-scale data quickly and process various data types, including structured, semi-structured and unstructured data from multiple sources, including text, PDF, images and video.

    Storage Industry Leaders Building AI Data Platforms With NVIDIA
    NVIDIA-Certified Storage partners are collaborating with NVIDIA to build custom AI data platforms.

    • DDN is architecting AI Data Platform capabilities into its DDN Infinia AI platform.
    • Dell is creating AI data platforms for its family of Dell PowerScale and Project Lightning solutions.
    • Hewlett Packard Enterprise is infusing AI Data Platform capabilities into HPE Private Cloud for AI, HPE Data Fabric, HPE Alletra Storage MP and HPE GreenLake for File Storage.
    • Hitachi Vantara is bringing AI Data Platform into the Hitachi IQ ecosystem, helping customers innovate with storage systems and data offerings that drive tangible AI outcomes.
    • IBM is integrating AI Data Platform as part of its content-aware storage capability with IBM Fusion and IBM Storage Scale technology to accelerate retrieval-augmented generation applications.
    • NetApp is advancing enterprise storage for agentic AI with the NetApp AIPod solution built with AI Data Platform.
    • Nutanix Cloud Platform with Nutanix Unified Storage will integrate with the NVIDIA AI Data Platform and enable inferencing and agentic workflows deployed across edge, data center and public cloud.
    • Pure Storage will deliver AI Data Platform capabilities with Pure Storage FlashBlade.
    • VAST Data is working with AI Data Platform to curate real-time insights with VAST InsightEngine.
    • WEKA Data Platform software integrates with NVIDIA GPUs, DPUs and networking to optimize data access for agentic AI reasoning and insights and deliver a high-performance storage foundation that accelerates AI inference and token processing workloads.

    NVIDIA-Certified Storage providers are planning to offer solutions created with the NVIDIA AI Data Platform starting this month.

    Learn more by watching the NVIDIA GTC keynote and registering for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Alex Shapiro
    Enterprise Networking
    1-415-608-5044
    ashapiro@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; NVIDIA’s collaboration with third parties; third parties adopting or offering NVIDIA’s products and technologies; and with the world’s storage leaders, NVIDIA building a new class of enterprise infrastructure that companies need to deploy and scale agentic AI across hybrid data centers are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, BlueField, NVIDIA NeMo, NVIDIA NIM and NVIDIA Spectrum-X are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/5ecf8d79-95ab-4140-809f-bd1d6aaa111d

  • MIL-OSI: Climate Tech Companies Adopt NVIDIA Earth-2 for High-Resolution, Energy-Efficient, More Accurate Weather Predictions and Disaster Preparedness

    Source: GlobeNewswire (MIL-OSI)

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced the NVIDIA Omniverse Blueprint for Earth-2 weather analytics to accelerate the development of more accurate weather forecasting solutions.

    Climate-related weather events have had a $2 trillion impact on the global economy over the last decade. The new Omniverse Blueprint equips users with the latest technologies to help global organizations improve risk management and disaster preparedness.

    The NVIDIA Omniverse Blueprint for Earth-2 weather analytics offers reference workflows — including NVIDIA GPU acceleration libraries, a physics-AI framework, development tools and microservices — to help enterprises go from prototyping to production with weather forecast models.

    Easy-to-deploy NVIDIA NIM™ microservices for NVIDIA Earth-2 are also part of the blueprint, including CorrDiff for downscaling and FourCastNet for predicting global atmospheric dynamics of various weather and climate variables. These are already being used by weather technology companies, researchers and government agencies to derive insights and mitigate risk from extreme weather events.

    “We’re seeing more extreme weather events and natural disasters than ever, threatening lives and property,” said Jensen Huang, founder and CEO of NVIDIA. “The NVIDIA Omniverse Blueprint for Earth-2 will help industries around the world prepare for — and mitigate — climate change and weather-related disasters.”

    Ecosystem Support
    Industry-leading climate tech companies including AI company G42, JBA Risk Management, Spire and others are using the blueprint to develop unique AI-augmented solutions.

    When combined with proprietary enterprise data in the $20 billion climate tech industry, the NVIDIA Earth-2 platform helps developers build solutions that deliver warnings and updated forecasts in seconds rather than minutes or hours with traditional CPU-driven modeling.

    G42 is integrating various components of the Omniverse Blueprint with its own AI-driven forecasting models for Earth-2 to provide the UAE’s National Center of Meteorology with AI technologies for advanced weather forecasting and disaster management.

    “G42 is advancing AI-powered forecasting to help governments and enterprises strengthen resilience against extreme weather in a rapidly changing world,” said Andrew Jackson, CEO of Inception, a G42 company. “Using high-resolution weather and climate modeling, we are transforming how organizations anticipate and respond to severe weather conditions with precision and speed. Building on NVIDIA’s CorrDiff model, we have developed a custom AI-driven system that downscales coarse weather data into hyper-local forecasts, enabling faster predictions at an unprecedented scale. Combined with the Earth-2 Blueprint, this technology equips decision-makers with actionable intelligence to protect communities, safeguard infrastructure and plan for a more resilient future.”

    Spire Global used AI components from the blueprint as a reference to develop new AI products that integrate its proprietary satellite data and deliver medium-range and sub-seasonal forecasts out to 45 days. Powered by NVIDIA GPUs and the Omniverse Blueprint for Earth-2, Spire’s models run 1,000x faster than traditional physics-based models, enabling large ensemble forecasts that capture the full range of possible weather outcomes.

    In addition to the Central Weather Administration of Taiwan and The Weather Company, other companies adopting or exploring Earth-2 include 3D mapping company Ecopia, spatial analytics company Esri, green energy company GCL Power, flood risk management company JBA Risk Management, aerospace company OroraTech, and Tomorrow.io, a leading resilience platform powered by proprietary space data and weather intelligence.

    Groundbreaking Generative AI for Climate Tech
    The Earth-2 platform offers tools, microservices and an array of state-of-the-art AI weather models for visualizing and simulating the globe.

    CorrDiff, part of the Omniverse Blueprint, is available as an NVIDIA NIM microservice. Compared with CPUs, it can be 500x faster and 10,000x more energy-efficient in delivering high-resolution numerical weather predictions.

    The Omniverse Blueprint for Earth-2 allows independent software vendors to develop and deploy AI-augmented solutions and use observational data to make their solutions faster and more accurate.

    Esri, a leader in geospatial technology, is collaborating with NVIDIA to connect its ArcGIS platform to Earth-2 through the blueprint. OroraTech is exploring connecting its data platform to the Omniverse Blueprint for Earth-2.

    Tomorrow.io contributed its near-real-time proprietary satellite data to help create an NVIDIA digital twin of Earth for next-generation AI model training, inference and reinforcement.

    A key component of the new blueprint is NVIDIA Omniverse™, a platform for developing OpenUSD-based 3D workflows and applications. The Omniverse Blueprint for Earth-2 showcases how developers can use Omniverse software development kits and microservices to build NVIDIA RTX™-powered visualization pipelines for rendering geospatial and weather data.

    NVIDIA DGX Cloud-Powered Compute
    The Omniverse Blueprint for Earth-2 taps into the NVIDIA DGX™ Cloud platform to demonstrate full-stack acceleration for AI-augmented weather forecasting. Running on NVIDIA DGX GB200, NVIDIA HGX™ B200 and NVIDIA OVX™ supercomputers, the blueprint provides a path to simulating and visualizing global climate at exceptional speed and scale.

    Learn more by watching the NVIDIA GTC keynote and registering for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Cliff Edwards
    Enterprise Communications
    NVIDIA Corporation
    +1-415-699-2755
    cliffe@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting NVIDIA’s products and technologies and the benefits and impact thereof; and the NVIDIA Omniverse Blueprint for Earth-2 helping industries around the world prepare for — and mitigate — climate change and weather-related disasters are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA NIM, NVIDIA Omniverse, NVIDIA OVX, NVIDIA DGX, NVIDIA HGX and NVIDIA RTX are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/8b94536e-e49f-4a2c-a364-07c68495e476

  • MIL-OSI: NVIDIA Announces DGX Spark and DGX Station Personal AI Computers

    Source: GlobeNewswire (MIL-OSI)

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today unveiled NVIDIA DGX™ personal AI supercomputers powered by the NVIDIA Grace Blackwell platform.

    DGX Spark — formerly Project DIGITS — and DGX Station™, a new high-performance NVIDIA Grace Blackwell desktop supercomputer powered by the NVIDIA Blackwell Ultra platform, enable AI developers, researchers, data scientists and students to prototype, fine-tune and run inference on large models from their desktops. Users can run these models locally or deploy them on NVIDIA DGX Cloud or any other accelerated cloud or data center infrastructure.

    DGX Spark and DGX Station bring the power of the Grace Blackwell architecture, previously only available in the data center, to the desktop. Global system builders developing DGX Spark and DGX Station include ASUS, Dell, HP Inc. and Lenovo.

    “AI has transformed every layer of the computing stack. It stands to reason a new class of computers would emerge — designed for AI-native developers and to run AI-native applications,” said Jensen Huang, founder and CEO of NVIDIA. “With these new DGX personal AI computers, AI can span from cloud services to desktop and edge applications.”

    Igniting Innovation With DGX Spark
    DGX Spark is the world’s smallest AI supercomputer, empowering millions of researchers, data scientists, robotics developers and students to push the boundaries of generative and physical AI with massive performance and capabilities.

    At the heart of DGX Spark is the NVIDIA GB10 Grace Blackwell Superchip, optimized for a desktop form factor. GB10 features a powerful NVIDIA Blackwell GPU with fifth-generation Tensor Cores and FP4 support, delivering up to 1,000 trillion operations per second of AI compute for fine-tuning and inference with the latest AI reasoning models, including the NVIDIA Cosmos Reason world foundation model and NVIDIA GR00T N1 robot foundation model.

    The GB10 Superchip uses NVIDIA NVLink™-C2C interconnect technology to deliver a CPU+GPU-coherent memory model with 5x the bandwidth of fifth-generation PCIe. This lets the GPU and CPU share data coherently, optimizing performance for memory-intensive AI developer workloads.

    NVIDIA’s full-stack AI platform enables DGX Spark users to seamlessly move their models from their desktops to DGX Cloud or any accelerated cloud or data center infrastructure — with virtually no code changes — making it easier than ever to prototype, fine-tune and iterate on their workflows.

    Full Speed Ahead With DGX Station
    NVIDIA DGX Station brings data-center-level performance to desktops for AI development. The first desktop system to be built with the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, DGX Station features a massive 784GB of coherent memory space to accelerate large-scale training and inferencing workloads. The GB300 Desktop Superchip features an NVIDIA Blackwell Ultra GPU with latest-generation Tensor Cores and FP4 precision — connected to a high-performance NVIDIA Grace™ CPU via NVLink-C2C — delivering best-in-class system communication and performance.

    DGX Station also features the NVIDIA ConnectX®-8 SuperNIC, optimized to supercharge hyperscale AI computing workloads. With support for networking at up to 800Gb/s, the ConnectX-8 SuperNIC delivers fast, efficient connectivity, enabling multiple DGX Stations to be linked for even larger workloads and providing network-accelerated data transfers for AI.

    Combining these state-of-the-art DGX Station capabilities with the NVIDIA CUDA-X™ AI platform, teams can achieve exceptional desktop AI development performance.

    In addition, users gain access to NVIDIA NIM™ microservices with the NVIDIA AI Enterprise software platform, which offers highly optimized, easy-to-deploy inference microservices backed by enterprise support.

    Availability
    Reservations for DGX Spark systems open today at nvidia.com.

    DGX Station is expected to be available from manufacturing partners like ASUS, BOXX, Dell, HP, Lambda and Supermicro later this year.

    Learn more by watching the NVIDIA GTC keynote and registering for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Pearlina Boc
    NVIDIA Corporation
    +1-562-275-5781
    pboc@nvidia.com  

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting or offering NVIDIA’s products and technologies; by putting the NVIDIA Grace Blackwell Superchip on every desk, and at every AI developer’s fingertips, NVIDIA empowering millions of people to shape the future of AI; and with new DGX AI supercomputers, software providers, government agencies, startups and researchers being able to prototype, fine-tune and run large AI models — transforming the way they work and create are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. 
These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, ConnectX, CUDA-X, DGX, DGX Station, NVIDIA Grace, NVIDIA NIM and NVLink are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/a1933d3b-32bd-450a-88f9-cd3df95857e1

  • MIL-OSI: First Farmers Financial Corp. Declares Record Dividend

    Source: GlobeNewswire (MIL-OSI)

    Converse, Indiana, March 18, 2025 (GLOBE NEWSWIRE) — First Farmers Financial Corp. (OTCQX Banks: FFMR), the parent company of First Farmers Bank & Trust Co., announced that on March 18, 2025, the Board of Directors approved a record quarterly cash dividend of $0.49 per share, payable on April 15, 2025, to shareholders of record as of March 31, 2025. This quarterly dividend represents a 2.1% increase over the $0.48 dividend declared in March 2024.

    First Farmers Financial Corp. is a $3.2 billion financial holding company headquartered in Converse, Indiana. First Farmers Bank & Trust has offices throughout Carroll, Cass, Clay, Grant, Hamilton, Howard, Huntington, Madison, Marshall, Miami, Starke, Sullivan, Tippecanoe, Tipton, Vigo and Wabash counties in Indiana and offices in Coles, Edgar, and Vermilion counties in Illinois. First Farmers Financial Corp. is traded on the OTC Markets Group, Inc. “OTCQX” exchange under the ticker symbol FFMR.


  • MIL-OSI: LIS Technologies Inc. Achieves TRL-4 in an Independent Technical Readiness Assessment of its Patented Laser Enrichment Technology (CRISLA)

    Source: GlobeNewswire (MIL-OSI)

    Oak Ridge, Tennessee, March 18, 2025 (GLOBE NEWSWIRE) — LIS Technologies Inc. (“LIST” or “the Company”), a proprietary developer of advanced laser technology and the only USA-origin and patented laser uranium enrichment company, today announced that it convened a panel of independent reviewers to perform a Technology Readiness Assessment (TRA) of the CRISLA-3G technology at the LIST facility in Oak Ridge, TN during March 11–13, 2025.

    The CRISLA-3G laser isotope separation technology was evaluated and determined to meet all elements required for TRL-4, conforming to the Department of Energy guide DOE G 413.3-4A. The assessment confirmed that all critical components were successfully validated in a laboratory environment, supported by experimental results from the integrated system.

    The TRA Team leveraged a well-established TRA process developed and implemented by the U.S. Department of Energy (DOE) Office of Environmental Management (EM). Each Critical Technology Element (CTE) was assessed against TRL-4, TRL-5, and TRL-6 calculator elements, which address technical, manufacturing, and programmatic factors. The Technology Readiness Level (TRL) is a technology maturity system ranging from TRL-1 (basic principles observed and reported) to TRL-9 (actual system operated over full range of expected conditions). The TRL rating system was developed by NASA and DoD to evaluate the deployment readiness of a given technology and has been adopted by agencies across the federal government.

    “We are very pleased that the independent Technology Readiness Assessment team scored our TRL at 4, meeting 27 out of 27 criteria,” said Christo Liebenberg, CEO and Co-Founder of LIS Technologies Inc. “Also identified were the critical technical elements (CTEs) needed to progress through TRL-5, 6 and 7 in the coming years. We have high confidence that we can meet all these CTEs in our roadmap to commercialization.”

    “Through our interaction with the TRL assessment team, I feel reassured that our technology is moving forward in the right direction,” said Viktor Chikan, Ph.D., Co-Chief Technical Officer of LIS Technologies Inc. “In my view, the TRL assessment provides the necessary transparency for both investors and the technical team to execute on the project plan and realize the commercial enrichment facility based on CRISLA technology.”

    “This is a very important milestone for the advancement of CRISLA technology,” said Keith Everly, Head of Security and IP Management of LIS Technologies Inc. “I am pleased that our self-assessment of our progress with the CRISLA technology process is in good alignment with the assessment of a qualified independent board of reviewers.”

    “The Technology Readiness Level framework is essential for guiding innovative technologies toward full-scale commercialization,” said Jay Yu, Executive Chairman and President of LIS Technologies Inc. “This review of our patented CRISLA technology underscores the substantial progress LIST’s technical team has achieved in preparing the system for the demonstration activities required for TRL 5. Successfully completing those demonstration steps will be a major threshold in establishing our leadership in this space.”

    About LIS Technologies Inc.

    LIS Technologies Inc. (LIST) is a USA-based, proprietary developer of a patented advanced laser technology that uses infrared lasers to selectively excite the molecules of desired isotopes in order to separate them from other isotopes. The Laser Isotope Separation Technology (L.I.S.T) has a wide range of applications, including uranium enrichment, where LIST is the only USA-origin, patented laser enrichment company, and offers several major advantages over traditional methods such as gaseous diffusion, centrifuges and prior-art laser enrichment. The LIST proprietary laser-based process is more energy-efficient and has the potential to be deployed with highly competitive capital and operational costs. L.I.S.T is optimized for Low Enriched Uranium (LEU) for existing civilian nuclear power plants, High-Assay LEU (HALEU) for the next generation of Small Modular Reactors (SMRs) and microreactors, the production of stable isotopes for medical and scientific research, and applications in quantum computing manufacturing for semiconductor technologies. The Company employs a world-class nuclear technical team working alongside leading nuclear entrepreneurs and industry professionals, and maintains strong relationships with government and private nuclear industries.

    In 2024, LIS Technologies Inc. (Laser Isotope Separation Technologies) was selected as one of six domestic companies by the U.S. Department of Energy (DOE) to participate in the Low-Enriched Uranium (LEU) Enrichment Acquisition Program. This initiative allocates up to $3.4 billion overall, with contracts lasting for up to 10 years. Each awardee is slated to receive a minimum contract of $2 million.

    For more information please visit: LaserIsTech.com

    For further information, please contact:
    Email: info@laseristech.com
    Telephone: 800-388-5492
    Follow us on X Platform
    Follow us on LinkedIn

    Forward Looking Statements

    This news release contains “forward-looking statements” within the meaning of Section 21E of the Securities Exchange Act of 1934, as amended, and the Private Securities Litigation Reform Act of 1995. In this context, forward-looking statements mean statements related to future events, which may impact our expected future business and financial performance, and often contain words such as “expects”, “anticipates”, “intends”, “plans”, “believes”, “will”, “should”, “could”, “would” or “may” and other words of similar meaning. These forward-looking statements are based on information available to us as of the date of this news release and represent management’s current views and assumptions. Forward-looking statements are not guarantees of future performance, events or results and involve known and unknown risks, uncertainties and other factors, which may be beyond our control. For LIS Technologies Inc., particular risks and uncertainties that could cause our actual future results to differ materially from those expressed in our forward-looking statements include, but are not limited to, the following, which are, and will be, exacerbated by any worsening of the global business and economic environment: (i) risks related to the development of new or advanced technology, including difficulties with design and testing, cost overruns, development of competitive technology, loss of key individuals and uncertainty of success of patent filings; (ii) our ability to obtain contracts and funding to be able to continue operations; (iii) risks related to uncertainty regarding our ability to commercially deploy a competitive laser enrichment technology; and (iv) risks related to the impact of government regulation and policies, including by the DOE and the U.S. Nuclear Regulatory Commission; and other risks and uncertainties discussed in this and our other filings with the SEC.
Only after successful completion of our Phase 2 Pilot Plant demonstration will LIS Technologies be able to make realistic economic predictions for a Commercial Facility. Readers are cautioned not to place undue reliance on these forward-looking statements, which apply only as of the date of this news release. These factors may not constitute all factors that could cause actual results to differ from those discussed in any forward-looking statement. Accordingly, forward-looking statements should not be relied upon as a predictor of actual results. We do not undertake to update our forward-looking statements to reflect events or circumstances that may arise after the date of this news release, except as required by law.


  • MIL-OSI: NVIDIA Blackwell RTX PRO Comes to Workstations and Servers for Designers, Developers, Data Scientists and Creatives to Build and Collaborate With Agentic AI

    Source: GlobeNewswire (MIL-OSI)

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced the NVIDIA RTX PRO™ Blackwell series — a revolutionary generation of workstation and server GPUs redefining workflows for AI, technical, creative, engineering and design professionals with breakthrough accelerated computing, AI inference, ray tracing and neural rendering technologies.

    For everything from agentic AI, simulation, extended reality, 3D design and complex visual effects to developing physical AI powering autonomous robots, vehicles and smart spaces, the RTX PRO Blackwell series provides professionals across industries the latest and greatest compute power, memory capacity and data throughput right at their fingertips — from their desktop, on the go with mobile workstations or powered by data center GPUs.

    The new lineup includes:

    • Data center GPU: NVIDIA RTX PRO 6000 Blackwell Server Edition
    • Desktop GPUs: NVIDIA RTX PRO 6000 Blackwell Workstation Edition, NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition, NVIDIA RTX PRO 5000 Blackwell, NVIDIA RTX PRO 4500 Blackwell and NVIDIA RTX PRO 4000 Blackwell
    • Laptop GPUs: NVIDIA RTX PRO 5000 Blackwell, NVIDIA RTX PRO 4000 Blackwell, NVIDIA RTX PRO 3000 Blackwell, NVIDIA RTX PRO 2000 Blackwell, NVIDIA RTX PRO 1000 Blackwell and NVIDIA RTX PRO 500 Blackwell

    “Software developers, data scientists, artists, designers and engineers need powerful AI and graphics performance to push the boundaries of visual computing and simulation, helping tackle incredible industry challenges,” said Bob Pette, vice president of enterprise platforms at NVIDIA. “Bringing NVIDIA Blackwell to workstations and servers will take productivity, performance and speed to new heights, accelerating AI inference serving, data science, visualization and content creation.”

    NVIDIA Blackwell Technology Comes to Workstations and Data Centers
    RTX PRO Blackwell GPUs unlock the potential of generative, agentic and physical AI by delivering exceptional performance, efficiency and scale.

    NVIDIA RTX PRO Blackwell GPUs feature:

    • NVIDIA Streaming Multiprocessor: Offers up to 1.5x faster throughput than the previous generation and new neural shaders that integrate AI inside of programmable shaders to drive the next decade of AI-augmented graphics innovations.
    • Fourth-Generation RT Cores: Delivers up to 2x the performance of the previous generation to create photoreal, physically accurate scenes and complex 3D designs with optimizations for NVIDIA RTX™ Mega Geometry.
    • Fifth-Generation Tensor Cores: Delivers up to 4,000 trillion AI operations per second (AI TOPS) and adds support for FP4 precision and NVIDIA DLSS 4 Multi Frame Generation, enabling a new era of AI-powered graphics and the ability to run and prototype larger AI models faster.
    • Larger, Faster GDDR7 Memory: Boosts bandwidth and capacity — up to 96GB for workstations and servers and up to 24GB on laptops. This enables applications to run faster and work with larger, more complex datasets for everything from tackling massive 3D and AI projects to exploring large-scale virtual reality environments.
    • Ninth-Generation NVIDIA NVENC: Accelerates video encoding speed and improves quality for professional video applications with added support for 4:2:2 encoding.
    • Sixth-Generation NVIDIA NVDEC: Provides up to double the H.264 decoding throughput and offers support for 4:2:2 H.264 and HEVC decode. Professionals can benefit from high-quality video playback, accelerate video data ingestion and use advanced AI-powered video editing features.
    • Fifth-Generation PCIe: Support for fifth-generation PCI Express provides double the bandwidth over the previous generation, improving data transfer speeds from CPU memory and unlocking faster performance for data-intensive tasks.
    • DisplayPort 2.1: Drives high-resolution displays at up to 4K at 480Hz and 8K at 165Hz. Increased bandwidth enables seamless multi-monitor setups, while high dynamic range and higher color depth support deliver more precise color accuracy for tasks like video editing, 3D design and live broadcasting.
    • Multi-Instance GPU (MIG): The RTX PRO 6000 data center and desktop GPUs and 5000 series desktop GPUs feature MIG technology, enabling secure partitioning of a single GPU into up to four instances (6000 series) or two instances (5000 series). Fault isolation is designed to prevent workload interference, providing secure, efficient resource allocation for diverse workloads and maximizing performance and flexibility.

    The new laptop GPUs also support the latest NVIDIA Blackwell Max-Q technologies, which intelligently and continually optimize laptop performance and power efficiency with AI.

    With neural rendering and AI-augmented tools, NVIDIA RTX PRO Blackwell GPUs enable the creation of stunning visuals, digital twins of real-world environments and immersive experiences with unprecedented speed and efficiency. The GPUs are built to elevate 3D computer-aided design and building information model workflows, offering designers and engineers exceptional performance for complex modeling, rendering and visualization.

    Designed for enterprise data center deployments, the RTX PRO 6000 Blackwell Server Edition features a passively cooled thermal design and can be configured with up to eight GPUs per server. For workloads that require the compute density and scale that data centers offer, the RTX PRO 6000 Blackwell Server Edition delivers powerful performance for next-generation AI, scientific and visual computing applications across industries such as healthcare, manufacturing, retail and media and entertainment.

    In addition, this powerful data center GPU can be combined with NVIDIA vGPU™ software to power AI workloads across virtualized environments and deliver high-performance virtual workstation instances to remote users. NVIDIA vGPU support for the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU is expected in the latter half of this year.

    “Foster + Partners has tested the NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition GPU on Cyclops, our GPU-based ray-tracing product,” said Martha Tsigkari, head of applied research and development and senior partner at Foster + Partners. “The new NVIDIA Blackwell GPU has managed to outperform everything we have tested before. For example, when using it with Cyclops, it has performed at 5x the speed of NVIDIA RTX A6000 GPUs. Rendering speeds also increased 5x, allowing tools like Cyclops to provide feedback on how well our design solutions perform in real time as we design them and resulting in intuitive yet informed decision-making from early conceptual stages.”

    “Early evaluation of the RTX PRO 6000 Blackwell technology by GE HealthCare’s engineering team has found the potential for up to 2x GPU processing time improvement on reconstruction algorithms, which could lead to significant benefit to customers,” said Rekha Ranganathan, senior executive and general manager of platforms and digital solutions at GE HealthCare.

    “NVIDIA RTX PRO 6000 Blackwell Workstation Edition GPUs enable incredibly sharp and photorealistic graphics,” said Jeff Hammoud, chief design officer at Rivian. “In conjunction with a Varjo XR4 headset and Autodesk VRED, the system delivered the level of crispness necessary for immersive automotive design reviews. With NVIDIA Blackwell support for PCIe Gen 5, we used two powerful 600W GPUs via VR SLI, allowing us to achieve the highest pixel density and the most stunning visuals we have ever experienced in VR.”

    “The 96GB memory and massive AI processing power in the NVIDIA RTX PRO 6000 Blackwell Workstation Edition GPU has boosted our productivity up to 3x with AI models like Llama 3.3-70B and Mixtral 8x7b, the NVIDIA Omniverse platform and industrial copilots,” said Shaun Greene, director of industry solutions at SoftServe. “We’ve seen immediate performance improvements and, using workstations, can now handle AI workloads that were previously only possible in the cloud or on rack servers — unlocking new possibilities for interactive demos and production workloads in retail, manufacturing and industrial edge applications.”

    RTX PRO GPUs run on the NVIDIA AI platform and feature larger memory capacity and the latest Tensor Cores to accelerate a deep ecosystem of AI-accelerated applications built on NVIDIA CUDA® and RTX technology. With everything from the latest AI-based content creation tools and new reasoning models, such as the NVIDIA Llama Nemotron Reason family of models and NVIDIA NIM™ microservices unveiled today, inferencing is faster than ever. And with over 400 NVIDIA CUDA-X™ libraries, developers can easily build, optimize, deploy and scale new AI applications, from workstations to the data center or cloud.

    Enterprises can fast-track their AI development and deployments by prototyping locally with an NVIDIA RTX PRO GPU and the NVIDIA Omniverse™ and NVIDIA AI Enterprise platforms, NVIDIA Blueprints and NVIDIA NIM, which gives access to easy-to-use inference microservices backed by enterprise-level support. They can also run these applications at scale on the ultimate universal data center GPU for AI and visual computing, delivering breakthrough acceleration for the most demanding compute-intensive enterprise workloads with the RTX PRO 6000 Blackwell Server Edition.

    Availability
    The NVIDIA RTX PRO 6000 Blackwell Server Edition will soon be available in server configurations from leading data center system partners including Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro.

    Cloud service providers and GPU cloud providers including AWS, Google Cloud, Microsoft Azure and CoreWeave will be among the first to offer instances powered by the NVIDIA RTX PRO 6000 Blackwell Server Edition later this year. In addition, the server edition GPU will be available in data center platforms from ASUS, GIGABYTE, Ingrasys, Quanta Cloud Technology (QCT) and other global system partners.

    The NVIDIA RTX PRO 6000 Blackwell Workstation Edition and NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition will be available through global distribution partners such as PNY and TD SYNNEX starting in April, with availability from manufacturers, such as BOXX, Dell, HP Inc., Lambda and Lenovo, starting in May.

    The NVIDIA RTX PRO 5000, RTX PRO 4500 and RTX PRO 4000 Blackwell GPUs will be available in the summer from BOXX, Dell, HP and Lenovo and through global distribution partners.

    NVIDIA RTX PRO Blackwell laptop GPUs will be available from Dell, HP, Lenovo and Razer starting later this year.

    To learn more about the NVIDIA RTX PRO Blackwell GPUs, watch the GTC keynote and register to attend sessions from NVIDIA and industry leaders at the show, which runs through March 21. Plus, explore extended-reality demos running on RTX PRO Blackwell GPUs at the XR Pavilion at The Tech Interactive museum.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Pearlina Boc
    NVIDIA Corporation
    +1-562-275-5781
    pboc@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, and performance of NVIDIA’s products, services, and technologies; third parties adopting or offering NVIDIA’s products and technologies; and bringing Blackwell to workstations and servers taking productivity, performance and speed to new heights, accelerating AI inference serving, data science, visualization and content creation are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, CUDA, CUDA-X, NVIDIA NIM, NVIDIA Omniverse, NVIDIA RTX, NVIDIA RTX PRO and vGPU are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at:
    https://www.globenewswire.com/NewsRoom/AttachmentNg/155918f9-2121-4220-9f20-6b968e34a460


  • MIL-OSI: NVIDIA Announces Isaac GR00T N1 — the World’s First Open Humanoid Robot Foundation Model — and Simulation Frameworks to Speed Robot Development

    Source: GlobeNewswire (MIL-OSI)

    • Now Available, Fully Customizable Foundation Model Brings Generalized Skills and Reasoning to Humanoid Robots
    • NVIDIA, Google DeepMind and Disney Research Collaborate to Develop Next-Generation Open-Source Newton Physics Engine
    • New Omniverse Blueprint for Synthetic Data Generation and Open-Source Dataset Jumpstart Physical AI Data Flywheel

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced a portfolio of technologies to supercharge humanoid robot development, including NVIDIA Isaac GR00T N1, the world’s first open, fully customizable foundation model for generalized humanoid reasoning and skills.

    The other technologies include simulation frameworks and blueprints such as the NVIDIA Isaac GR00T Blueprint for generating synthetic data, as well as Newton, an open-source physics engine — under development with Google DeepMind and Disney Research — purpose-built for developing robots.

    Available now, GR00T N1 is the first of a family of fully customizable models that NVIDIA will pretrain and release to worldwide robotics developers — accelerating the transformation of industries challenged by global labor shortages estimated at more than 50 million people.

    “The age of generalist robotics is here,” said Jensen Huang, founder and CEO of NVIDIA. “With NVIDIA Isaac GR00T N1 and new data-generation and robot-learning frameworks, robotics developers everywhere will open the next frontier in the age of AI.”

    GR00T N1 Advances Humanoid Developer Community
    The GR00T N1 foundation model features a dual-system architecture, inspired by principles of human cognition. “System 1” is a fast-thinking action model, mirroring human reflexes or intuition. “System 2” is a slow-thinking model for deliberate, methodical decision-making.

    Powered by a vision language model, System 2 reasons about its environment and the instructions it has received to plan actions. System 1 then translates these plans into precise, continuous robot movements. System 1 is trained on human demonstration data and a massive amount of synthetic data generated by the NVIDIA Omniverse™ platform.

    GR00T N1 can easily generalize across common tasks — such as grasping, moving objects with one or both arms, and transferring items from one arm to another — or perform multistep tasks that require long context and combinations of general skills. These capabilities can be applied across use cases such as material handling, packaging and inspection.

    Developers and researchers can post-train GR00T N1 with real or synthetic data for their specific humanoid robot or task.

    In his GTC keynote, Huang demonstrated 1X’s humanoid robot autonomously performing domestic tidying tasks using a post-trained policy built on GR00T N1. The robot’s autonomous capabilities are the result of an AI training collaboration between 1X and NVIDIA.

    “The future of humanoids is about adaptability and learning,” said Bernt Børnich, CEO of 1X Technologies. “NVIDIA’s GR00T N1 model provides a major breakthrough for robot reasoning and skills. With a minimal amount of post-training data, we were able to fully deploy on NEO Gamma — furthering our mission of creating robots that are not tools, but companions that can assist humans in meaningful, immeasurable ways.”

    Among the additional leading humanoid developers worldwide with early access to GR00T N1 are Agility Robotics, Boston Dynamics, Mentee Robotics and NEURA Robotics.

    NVIDIA, Google DeepMind and Disney Research Focus on Physics
    NVIDIA announced a collaboration with Google DeepMind and Disney Research to develop Newton, an open-source physics engine that lets robots learn how to handle complex tasks with greater precision.

    Built on the NVIDIA Warp framework, Newton will be optimized for robot learning and compatible with simulation frameworks such as Google DeepMind’s MuJoCo and NVIDIA Isaac™ Lab. Additionally, the three companies plan to enable Newton to use Disney’s physics engine.

    Google DeepMind and NVIDIA are collaborating to develop MuJoCo-Warp, which is expected to accelerate robotics machine learning workloads by more than 70x and will be available to developers through Google DeepMind’s MJX open-source library, as well as through Newton.

    Disney Research will be one of the first to use Newton to advance its robotic character platform that powers next-generation entertainment robots, such as the expressive Star Wars-inspired BDX droids that joined Huang on stage during his GTC keynote.

    “The BDX droids are just the beginning. We’re committed to bringing more characters to life in ways the world hasn’t seen before, and this collaboration with Disney Research, NVIDIA and Google DeepMind is a key part of that vision,” said Kyle Laughlin, senior vice president at Walt Disney Imagineering Research & Development. “This collaboration will allow us to create a new generation of robotic characters that are more expressive and engaging than ever before — and connect with our guests in ways that only Disney can.”

    NVIDIA and Disney Research, along with Intrinsic, announced an additional collaboration to build OpenUSD pipelines and best practices for robotics data workflows.

    More Data to Advance Robotics Post-Training
    Large, diverse, high-quality datasets are critical for robot development but costly to capture. For humanoids, real-world human demonstration data is limited by a person’s 24-hour day.

    Announced today, the NVIDIA Isaac GR00T Blueprint for synthetic manipulation motion generation helps address this challenge. Built on Omniverse and NVIDIA Cosmos Transfer world foundation models, the blueprint lets developers generate exponentially large amounts of synthetic motion data for manipulation tasks from a small number of human demonstrations.

    Using the first components available for the blueprint, NVIDIA generated 780,000 synthetic trajectories — the equivalent of 6,500 hours, or nine continuous months, of human demonstration data — in just 11 hours. Then, combining the synthetic data with real data, NVIDIA improved GR00T N1’s performance by 40%, compared with using only real data.

    To further equip the developer community with valuable training data, NVIDIA is releasing the GR00T N1 dataset as part of a larger open-source physical AI dataset — also announced at GTC and now available on Hugging Face.

    Availability
    NVIDIA GR00T N1 training data and task evaluation scenarios are now available for download from Hugging Face and GitHub. The NVIDIA Isaac GR00T Blueprint for synthetic manipulation motion generation is also now available as an interactive demo on build.nvidia.com or to download from GitHub.

    The NVIDIA DGX Spark personal AI supercomputer, also announced today at GTC, provides developers a turnkey system to expand GR00T N1’s capabilities for new robots, tasks and environments without extensive custom programming.

    The Newton physics engine is expected to be available later this year.

    Learn more by watching the NVIDIA GTC keynote and registering to attend key Humanoid Developer Day sessions.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Kristin Bryson
    Enterprise Communications
    NVIDIA Corporation
    +1-203-241-9190
    kbryson@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; NVIDIA’s collaborations with third parties; third parties adopting or offering NVIDIA’s products and technologies; and with NVIDIA Isaac GR00T N1 and new data generation and robot-learning frameworks, robotics developers everywhere opening the next frontier in the age of AI are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA Omniverse and NVIDIA Isaac are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/65cf6342-e940-44c0-a40c-dfe09ac433c9

    The MIL Network

  • MIL-OSI: NVIDIA Launches Family of Open Reasoning AI Models for Developers and Enterprises to Build Agentic AI Platforms

    Source: GlobeNewswire (MIL-OSI)

    • Post-Trained by NVIDIA, New Llama Nemotron Reasoning Models Provide Business-Ready Foundation for Agentic AI
    • Accenture, Amdocs, Atlassian, Box, Cadence, CrowdStrike, Deloitte, IQVIA, Microsoft, SAP and ServiceNow Pioneering Reasoning AI Agents With NVIDIA to Transform Work

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today announced the open Llama Nemotron family of models with reasoning capabilities, designed to provide developers and enterprises a business-ready foundation for creating advanced AI agents that can work independently or as connected teams to solve complex tasks.

    Built on Llama models, the NVIDIA Llama Nemotron reasoning family delivers on-demand AI reasoning capabilities. NVIDIA enhanced the new reasoning model family during post-training to improve multistep math, coding, reasoning and complex decision-making.

    This refinement process boosts the models’ accuracy by up to 20% compared with the base model and improves inference speed by 5x compared with other leading open reasoning models. The improvements in inference performance mean the models can handle more complex reasoning tasks, enhance decision-making capabilities and reduce operational costs for enterprises.

    Leading agentic AI platform pioneers — including Accenture, Amdocs, Atlassian, Box, Cadence, CrowdStrike, Deloitte, IQVIA, Microsoft, SAP and ServiceNow — are collaborating with NVIDIA on its new reasoning models and software.

    “Reasoning and agentic AI adoption is incredible,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA’s open reasoning models, software and tools give developers and enterprises everywhere the building blocks to create an accelerated agentic AI workforce.”

    NVIDIA Post-Training Boosts Accuracy and Reliability for Enterprise Reasoning
    Built to deliver production-ready AI reasoning, the Llama Nemotron model family is available as NVIDIA NIM™ microservices in Nano, Super and Ultra sizes — each optimized for different deployment needs.

    The Nano model delivers the highest accuracy on PCs and edge devices, the Super model offers the best accuracy and highest throughput on a single GPU, and the Ultra model will provide maximum agentic accuracy on multi-GPU servers.

    NVIDIA conducted extensive post-training on NVIDIA DGX™ Cloud using high-quality curated synthetic data generated by NVIDIA Nemotron™ and other open models, as well as additional curated datasets cocreated by NVIDIA.

    The tools, datasets and post-training optimization techniques used to develop the models will be openly available, giving enterprises the flexibility to build their own custom reasoning models.

    Agentic Platforms Team With NVIDIA to Enhance Reasoning for Industries
    Agentic AI platform industry leaders are working with the Llama Nemotron reasoning models to deliver advanced reasoning to enterprises.

    Microsoft is integrating Llama Nemotron reasoning models and NIM microservices into Microsoft Azure AI Foundry. This expands the Azure AI Foundry model catalog with options for customers to enhance services like Azure AI Agent Service for Microsoft 365.

    SAP is tapping Llama Nemotron models to advance SAP Business AI solutions and Joule, the AI copilot from SAP. Additionally, it is using NVIDIA NIM and NVIDIA NeMo™ microservices to improve code-completion accuracy for SAP ABAP programming language models.

    “We are collaborating with NVIDIA to integrate Llama Nemotron reasoning models into Joule to enhance our AI agents, making them more intuitive, accurate and cost effective,” said Walter Sun, global head of AI at SAP. “These advanced reasoning models will refine and rewrite user queries, enabling our AI to better understand inquiries and deliver smarter, more efficient AI-powered experiences that drive business innovation.”

    ServiceNow is harnessing Llama Nemotron models to build AI agents that offer greater performance and accuracy to enhance enterprise productivity across industries.

    Accenture has made NVIDIA Llama Nemotron reasoning models available on its AI Refinery platform — including new industry agent solutions announced today — to enable clients to rapidly develop and deploy custom AI agents tailored to industry-specific challenges, accelerating business transformation.

    Deloitte is planning to incorporate Llama Nemotron reasoning models into Zora AI, its recently announced agentic AI platform designed to support and emulate human decision-making and action, with agents that combine deep functional and industry-specific business knowledge with built-in transparency.

    NVIDIA AI Enterprise Delivers Essential Tools for Agentic AI
    Developers can deploy NVIDIA Llama Nemotron reasoning models with new NVIDIA agentic AI tools and software to streamline the adoption of advanced reasoning in collaborative AI systems.

    All part of the NVIDIA AI Enterprise software platform, the latest agentic AI building blocks include:

    • The NVIDIA AI-Q Blueprint, which enables enterprises to connect knowledge to AI agents that can autonomously perceive, reason and act. Built with NVIDIA NIM microservices, the blueprint integrates NVIDIA NeMo Retriever™ for multimodal information retrieval and enables agent and data connections, optimization and transparency using the open-source NVIDIA AgentIQ toolkit.
    • The NVIDIA AI Data Platform, a customizable reference design for a new class of enterprise infrastructure with AI query agents built with the AI-Q Blueprint.
    • New NVIDIA NIM microservices, which optimize inference for complex agentic AI applications and enable continuous learning and real-time adaptation across any environment. The microservices ensure reliable deployment of the latest models from leading model builders including Meta, Microsoft and Mistral AI.
    • NVIDIA NeMo microservices, which provide an efficient, enterprise-grade solution to quickly establish and maintain a robust data flywheel that enables AI agents to continuously learn from human- and AI-generated feedback. The NVIDIA AI Blueprint for building a data flywheel will offer a reference architecture for developers to easily build and optimize data flywheels using NVIDIA microservices.

    Availability
    The NVIDIA Llama Nemotron Nano and Super models and NIM microservices are available as a hosted application programming interface from build.nvidia.com and Hugging Face. Access for development, testing and research is free for members of the NVIDIA Developer Program.

    Enterprises can run Llama Nemotron NIM microservices in production with NVIDIA AI Enterprise on accelerated data center and cloud infrastructure. Developers can sign up to be notified when NVIDIA NeMo microservices are publicly available.

    The NVIDIA AI-Q Blueprint is expected to be available in April. The NVIDIA AgentIQ toolkit is available now on GitHub.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Anna Kiachian
    NVIDIA Corporation
    +1-650-224-9820
    akiachian@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting NVIDIA’s products and technologies and the benefits and impact thereof; NVIDIA’s open reasoning models, software and tools giving developers and enterprises everywhere the building blocks to create an accelerated agentic AI workforce are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.


    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, DGX, NVIDIA NeMo, NVIDIA Nemotron, NVIDIA NeMo Retriever and NVIDIA NIM are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/6b111210-07b7-4296-83fa-8c18c9acfbfc


  • MIL-OSI: NVIDIA Dynamo Open-Source Library Accelerates and Scales AI Reasoning Models

    Source: GlobeNewswire (MIL-OSI)

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today unveiled NVIDIA Dynamo, an open-source inference software for accelerating and scaling AI reasoning models in AI factories at the lowest cost and with the highest efficiency.

    Efficiently orchestrating and coordinating AI inference requests across a large fleet of GPUs is crucial to ensuring that AI factories run at the lowest possible cost to maximize token revenue generation.

    As AI reasoning goes mainstream, every AI model will generate tens of thousands of tokens used to “think” with every prompt. Increasing inference performance while continually lowering the cost of inference accelerates growth and boosts revenue opportunities for service providers.

    NVIDIA Dynamo, the successor to NVIDIA Triton Inference Server™, is new AI inference-serving software designed to maximize token revenue generation for AI factories deploying reasoning AI models. It orchestrates and accelerates inference communication across thousands of GPUs, and uses disaggregated serving to separate the processing and generation phases of large language models (LLMs) on different GPUs. This allows each phase to be optimized independently for its specific needs and ensures maximum GPU resource utilization.

    “Industries around the world are training AI models to think and learn in different ways, making them more sophisticated over time,” said Jensen Huang, founder and CEO of NVIDIA. “To enable a future of custom reasoning AI, NVIDIA Dynamo helps serve these models at scale, driving cost savings and efficiencies across AI factories.”

    Using the same number of GPUs, Dynamo doubles the performance and revenue of AI factories serving Llama models on today’s NVIDIA Hopper™ platform. When running the DeepSeek-R1 model on a large cluster of GB200 NVL72 racks, NVIDIA Dynamo’s intelligent inference optimizations also boost the number of tokens generated by over 30x per GPU.

    To achieve these inference performance improvements, NVIDIA Dynamo incorporates features that enable it to increase throughput and reduce costs. It can dynamically add, remove and reallocate GPUs in response to fluctuating request volumes and types, as well as pinpoint specific GPUs in large clusters that can minimize response computations and route queries. It can also offload inference data to more affordable memory and storage devices and quickly retrieve them when needed, minimizing inference costs.
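    The memory-tiering idea mentioned above, offloading cold inference data to cheaper storage and reloading it on demand, can be sketched as a small two-tier LRU cache. This is a toy illustration under stated assumptions; `TieredCache` and its methods are hypothetical names, not Dynamo's actual API.

    ```python
    # Toy sketch of tiered KV-cache offload: keep hot entries in a small fast
    # "GPU" tier and spill cold ones to a cheaper tier, reloading on demand.
    from collections import OrderedDict

    class TieredCache:
        def __init__(self, gpu_capacity: int):
            self.gpu = OrderedDict()   # fast tier, limited capacity (LRU order)
            self.host = {}             # cheap tier, effectively unbounded
            self.capacity = gpu_capacity

        def put(self, key, value):
            self.gpu[key] = value
            self.gpu.move_to_end(key)                   # newest entry is hottest
            while len(self.gpu) > self.capacity:        # evict coldest to cheap tier
                cold_key, cold_val = self.gpu.popitem(last=False)
                self.host[cold_key] = cold_val

        def get(self, key):
            if key in self.gpu:
                self.gpu.move_to_end(key)               # mark as recently used
                return self.gpu[key]
            if key in self.host:                        # reload on demand
                self.put(key, self.host.pop(key))
                return self.gpu[key]
            raise KeyError(key)

    cache = TieredCache(gpu_capacity=2)
    for k in ("a", "b", "c"):
        cache.put(k, k.upper())
    print(list(cache.gpu), list(cache.host))  # ['b', 'c'] ['a']
    ```

    The point of the sketch is the asymmetry: the fast tier stays small and hot, while capacity misses cost only a reload from cheaper memory rather than a recomputation.
    
    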

    NVIDIA Dynamo is fully open source and supports PyTorch, SGLang, NVIDIA TensorRT™-LLM and vLLM, allowing enterprises, startups and researchers to develop and optimize ways to serve AI models across disaggregated inference. It will enable users, including those at AWS, Cohere, CoreWeave, Dell, Fireworks, Google Cloud, Lambda, Meta, Microsoft Azure, Nebius, NetApp, OCI, Perplexity, Together AI and VAST, to accelerate the adoption of AI inference.

    Inference Supercharged
    NVIDIA Dynamo maps the knowledge that inference systems hold in memory from serving prior requests — known as KV cache — across potentially thousands of GPUs.

    It then routes new inference requests to the GPUs that have the best knowledge match, avoiding costly recomputations and freeing up GPUs to respond to new incoming requests.
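    The cache-aware routing described above can be sketched as a prefix-overlap score: send each request to the worker whose cached prompts share the longest prefix with the new prompt. This is a toy illustration with hypothetical names (`prefix_overlap`, `route`); Dynamo's actual KV-cache matching is considerably more sophisticated.

    ```python
    # Toy sketch of KV-cache-aware routing: pick the worker whose cached
    # prompt prefixes overlap most with the new prompt, so prefill can
    # reuse previously computed KV entries instead of recomputing them.

    def prefix_overlap(cached: str, prompt: str) -> int:
        """Length of the shared prefix between a cached prompt and a new one."""
        n = 0
        for a, b in zip(cached, prompt):
            if a != b:
                break
            n += 1
        return n

    def route(workers: dict[str, list[str]], prompt: str) -> str:
        """Pick the worker whose cache best matches the prompt.
        `workers` maps a worker id to prompts it has served recently."""
        def best_overlap(prompts: list[str]) -> int:
            return max((prefix_overlap(p, prompt) for p in prompts), default=0)
        return max(workers, key=lambda w: best_overlap(workers[w]))

    workers = {
        "gpu-0": ["Summarize the quarterly report for"],
        "gpu-1": ["Translate the following text into French:"],
    }
    print(route(workers, "Summarize the quarterly report for Q4"))  # gpu-0
    ```
    
    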

    “To handle hundreds of millions of requests monthly, we rely on NVIDIA GPUs and inference software to deliver the performance, reliability and scale our business and users demand,” said Denis Yarats, chief technology officer of Perplexity AI. “We look forward to leveraging Dynamo, with its enhanced distributed serving capabilities, to drive even more inference-serving efficiencies and meet the compute demands of new AI reasoning models.”

    Agentic AI
    AI provider Cohere is planning to power agentic AI capabilities in its Command series of models using NVIDIA Dynamo.

    “Scaling advanced AI models requires sophisticated multi-GPU scheduling, seamless coordination and low-latency communication libraries that transfer reasoning contexts seamlessly across memory and storage,” said Saurabh Baji, senior vice president of engineering at Cohere. “We expect NVIDIA Dynamo will help us deliver a premier user experience to our enterprise customers.”

    Disaggregated Serving
    The NVIDIA Dynamo inference platform also supports disaggregated serving, which assigns the different computational phases of LLMs — including building an understanding of the user query and then generating the best response — to different GPUs. This approach is ideal for reasoning models like the new NVIDIA Llama Nemotron model family, which uses advanced inference techniques for improved contextual understanding and response generation. Disaggregated serving allows each phase to be fine-tuned and resourced independently, improving throughput and delivering faster responses to users.
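    A minimal sketch of the disaggregation idea: prompt processing (prefill) and token generation (decode) run on independently sized worker pools, with the intermediate state handed off between them. All names here are illustrative stand-ins, not Dynamo's API, and thread pools stand in for separate GPUs.

    ```python
    # Toy sketch of disaggregated serving: the compute-heavy prefill phase
    # and the memory-bound decode phase run on separate worker pools that
    # can be sized and scaled independently.
    from concurrent.futures import ThreadPoolExecutor
    from dataclasses import dataclass

    @dataclass
    class Request:
        prompt: str

    def prefill(req: Request) -> list[int]:
        """Stand-in for prompt processing: build a 'KV cache' (here, byte values)."""
        return [ord(c) for c in req.prompt]

    def decode(kv_cache: list[int], max_tokens: int) -> list[int]:
        """Stand-in for token generation: consume the transferred KV cache."""
        return kv_cache[-1:] * max_tokens  # dummy generation loop

    def serve(req: Request, prefill_pool, decode_pool) -> list[int]:
        kv = prefill_pool.submit(prefill, req).result()    # runs on prefill workers
        return decode_pool.submit(decode, kv, 4).result()  # runs on decode workers

    prefill_pool = ThreadPoolExecutor(max_workers=2)  # sized for prompt bursts
    decode_pool = ThreadPoolExecutor(max_workers=8)   # sized for long generations
    print(serve(Request("hi"), prefill_pool, decode_pool))  # [105, 105, 105, 105]
    ```

    The design point is that each pool can be tuned and scaled for its own bottleneck (compute for prefill, memory bandwidth for decode) rather than one fleet being provisioned for both at once.
    
    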

    Together AI, the AI Acceleration Cloud, is looking to integrate its proprietary Together Inference Engine with NVIDIA Dynamo to enable seamless scaling of inference workloads across GPU nodes. This also lets Together AI dynamically address traffic bottlenecks at various stages of the model pipeline.

    “Scaling reasoning models cost effectively requires new advanced inference techniques, including disaggregated serving and context-aware routing,” said Ce Zhang, chief technology officer of Together AI. “Together AI provides industry-leading performance using our proprietary inference engine. The openness and modularity of NVIDIA Dynamo will allow us to seamlessly plug its components into our engine to serve more requests while optimizing resource utilization — maximizing our accelerated computing investment. We’re excited to leverage the platform’s breakthrough capabilities to cost-effectively bring open-source reasoning models to our users.”

    NVIDIA Dynamo Unpacked
    NVIDIA Dynamo includes four key innovations that reduce inference serving costs and improve user experience:

    • GPU Planner: A planning engine that dynamically adds and removes GPUs to adjust to fluctuating user demand, avoiding GPU over- or under-provisioning.
    • Smart Router: An LLM-aware router that directs requests across large GPU fleets to minimize costly GPU recomputations of repeat or overlapping requests — freeing up GPUs to respond to new incoming requests.
    • Low-Latency Communication Library: An inference-optimized library that supports state-of-the-art GPU-to-GPU communication and abstracts the complexity of data exchange across heterogeneous devices, accelerating data transfer.
    • Memory Manager: An engine that intelligently offloads and reloads inference data to and from lower-cost memory and storage devices without impacting user experience. 
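    The GPU Planner's core idea, matching active workers to fluctuating demand within provisioning bounds, reduces to a small sizing rule. The sketch below is a hypothetical illustration of that rule, not the actual planner logic.

    ```python
    # Toy sketch of a GPU planner: scale the number of active workers with
    # queue depth, within min/max bounds, to avoid over- or under-provisioning.

    def plan_workers(queue_depth: int, per_worker: int, lo: int, hi: int) -> int:
        """Workers needed to drain `queue_depth` requests at `per_worker` each,
        clamped to the provisioned range [lo, hi]."""
        needed = -(-queue_depth // per_worker)   # ceiling division
        return max(lo, min(hi, needed))

    print(plan_workers(queue_depth=35, per_worker=8, lo=1, hi=16))   # 5
    print(plan_workers(queue_depth=0, per_worker=8, lo=1, hi=16))    # 1
    print(plan_workers(queue_depth=500, per_worker=8, lo=1, hi=16))  # 16
    ```
    
    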

    NVIDIA Dynamo will be made available in NVIDIA NIM™ microservices and supported in a future release by the NVIDIA AI Enterprise software platform with production-grade security, support and stability.

    Learn more by watching the NVIDIA GTC keynote, reading this blog on Dynamo and registering for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Cliff Edwards
    NVIDIA Corporation
    +1-415-699-2755
    cliffe@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting NVIDIA’s products and technologies and the benefits and impact thereof; industries around the world training AI models to think and learn in different ways, making them more sophisticated over time; and to enable a future of custom reasoning AI, NVIDIA Dynamo helping serve these models at scale, driving cost savings and efficiencies across AI factories are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.


    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA Hopper, NVIDIA NIM, NVIDIA Triton Inference Server and TensorRT are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/e82546dd-6224-4ebb-8d5a-3476d18e97d0


  • MIL-OSI: NexusX Achieves Highest Level Compliance Certification from the Asia-Pacific Financial Alliance (APFA), Setting a New Benchmark for Global Digital Asset Trading

    Source: GlobeNewswire (MIL-OSI)

    Los Angeles, CA, March 18, 2025 (GLOBE NEWSWIRE) — Global leading cryptocurrency exchange NexusX today announced that it has officially received the “AAA Digital Asset Service Provider” certification from the Asia-Pacific Financial Alliance (APFA). This makes it the first digital asset trading platform in the world to meet the top three standards for anti-money laundering (AML), user asset segregation, and operational transparency. This certification further solidifies NexusX’s position as a top-tier compliant exchange globally, providing users with a safer, more transparent, and compliant digital asset trading environment.

    NexusX: A Global Leader in Compliant Cryptocurrency Trading

    NexusX is a cryptocurrency exchange registered in the United States and holds a FinCEN MSB license. It is dedicated to providing secure, efficient, and compliant cryptocurrency trading services to users worldwide. Recognized by international financial regulatory bodies, NexusX employs top-tier security technologies, AI-driven risk control systems, and global liquidity support to offer a diverse range of financial products, including spot trading, futures trading, DeFi trading, and NFT trading.

    Achieving the APFA certification further demonstrates NexusX’s industry-leading position in financial compliance, security regulation, and user asset protection.

    NexusX Enhances Trading Security Through APFA Certification

    APFA is one of the most authoritative financial regulatory organizations in the Asia-Pacific region, and its “AAA Digital Asset Service Provider” certification is considered the highest compliance standard globally. According to the compliance audit report released by APFA, NexusX excels in the following areas:

    – Cold wallet reserve coverage rate of 102%, ensuring complete asset segregation and protection against hacking and fund misappropriation risks.

    – All fiat assets are held in partner banks regulated by the International Banking Association (IBA), ensuring the safety and compliance of fiat funds.

    – An intelligent anti-money laundering (AML) system that covers 20 countries, capable of automatically monitoring and blocking suspicious transactions, significantly enhancing platform security.

    – Transparent and verifiable operational data, with all transaction data synchronized in real-time to financial regulatory agencies in various countries, ensuring legality and compliance.

    “Compliance is the cornerstone of global service,” said Jonathan Reynolds, CEO of NexusX, at the press conference. “NexusX has successfully integrated regulatory interfaces from 20 countries through our self-developed regulatory sandbox system, achieving real-time compliance for trading data.” This means that both individual users and institutional investors can enjoy bank-level security and transparency when trading on NexusX.

    NexusX Achieves 95% Retention Rate Among Institutional Investors, Becoming a Trusted Exchange

    In the context of global regulatory compliance, NexusX’s market performance continues to rise. According to the latest disclosures from the internationally recognized auditing firm VeriTrust:

    – In Q2 2025, NexusX accounted for 38% of trading volume in the global compliant market, far exceeding the industry average.

    – NexusX boasts a retention rate of 95% among institutional investors, making it one of the most trusted digital currency trading platforms by institutions.

    – Daily trading volume has significantly increased, with global users surpassing 15 million, making it one of the fastest-growing digital asset trading platforms worldwide.

    Industry analysts believe that NexusX, positioned as the safest and most compliant cryptocurrency exchange globally, is attracting an increasing number of Wall Street investment banks, hedge funds, and sovereign funds into the crypto market thanks to its robust compliance system, advanced trading technology, and solid market performance.

    NexusX’s Future Development Strategy: Building the Safest Digital Asset Trading Ecosystem

    As NexusX rapidly develops in the global market, the platform will continue to strengthen its compliance framework and promote the legitimization of the global digital asset market:

    – Expanding Global Compliance Licenses: Plans to apply for higher-level digital asset trading licenses in key markets such as the EU, Japan, Singapore, UAE, and Australia.

    – Upgrading AI Trading Risk Control Systems: Utilizing artificial intelligence and big data analytics to optimize trading security and reduce market manipulation risks.

    – Launching Institutional-Level Compliance Services: Collaborating with top international legal teams and auditing firms to attract more large financial institutions, family offices, and fund companies into the NexusX ecosystem.

    – Enhancing On-Chain Asset Management: Using smart contracts and transparent on-chain ledgers to ensure all transactions are verifiable, traceable, and auditable, completely eliminating malicious manipulation.

    Industry experts point out that NexusX’s APFA certification signifies its compliance capabilities equivalent to traditional financial institutions, positioning NexusX to become the most trusted trading platform in the global digital asset trading market. This certification not only boosts confidence among global investors but also drives the entire industry toward a more compliant, transparent, and secure future.

    Disclaimer: The information provided in this press release is not a solicitation for investment, nor is it intended as investment advice, financial advice, or trading advice. It is strongly recommended you practice due diligence, including consultation with a professional financial advisor, before investing in or trading cryptocurrency and securities.

    Website: https://trade.nexusxing.com


  • MIL-OSI: NVIDIA Blackwell Ultra AI Factory Platform Paves Way for Age of AI Reasoning

    Source: GlobeNewswire (MIL-OSI)

    • Top Computer Makers, Cloud Service Providers and GPU Cloud Providers to Boost Training and Test-Time Scaling Inference, From Reasoning to Agentic and Physical AI
    • New Open-Source NVIDIA Dynamo Inference Software to Scale Up Reasoning AI Services With Leaps in Throughput, Faster Response Time and Reduced Total Cost of Ownership
    • NVIDIA Spectrum-X Enhanced 800G Ethernet Networking for AI Infrastructure Significantly Reduces Latency and Jitter

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — NVIDIA today announced the next evolution of the NVIDIA Blackwell AI factory platform, NVIDIA Blackwell Ultra — paving the way for the age of AI reasoning.

    NVIDIA Blackwell Ultra boosts training and test-time scaling inference — the art of applying more compute during inference to improve accuracy — to enable organizations everywhere to accelerate applications such as AI reasoning, agentic AI and physical AI.
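    Test-time scaling can be illustrated with a toy self-consistency loop: sampling a model several times and majority-voting the answers trades extra inference compute for a more reliable result. The `flaky_model` below is a deterministic stand-in invented for this sketch, not a real model or any NVIDIA API.

    ```python
    # Toy illustration of test-time scaling via self-consistency: more
    # samples (more compute at inference time) buy a more reliable answer.
    from collections import Counter

    def flaky_model(question: str, attempt: int) -> str:
        """Stand-in for an LLM: right on 3 of every 5 attempts, else a junk answer."""
        return "42" if attempt % 5 < 3 else f"guess-{attempt}"

    def answer(question: str, samples: int) -> str:
        """Sample the model `samples` times and return the majority vote."""
        votes = Counter(flaky_model(question, i) for i in range(samples))
        return votes.most_common(1)[0][0]

    # A single sample is right only 3 times in 5; the majority over 25
    # samples is right every time, at 25x the inference cost.
    print(answer("what is 6 * 7?", samples=25))  # 42
    ```
    
    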

    Built on the groundbreaking Blackwell architecture introduced a year ago, Blackwell Ultra includes the NVIDIA GB300 NVL72 rack-scale solution and the NVIDIA HGX™ B300 NVL16 system. The GB300 NVL72 delivers 1.5x more AI performance than the NVIDIA GB200 NVL72 and increases Blackwell’s revenue opportunity for AI factories by 50x compared with those built with NVIDIA Hopper™.

    “AI has made a giant leap — reasoning and agentic AI demand orders of magnitude more computing performance,” said Jensen Huang, founder and CEO of NVIDIA. “We designed Blackwell Ultra for this moment — it’s a single versatile platform that can easily and efficiently do pretraining, post-training and reasoning AI inference.”

    NVIDIA Blackwell Ultra Enables AI Reasoning
    The NVIDIA GB300 NVL72 connects 72 Blackwell Ultra GPUs and 36 Arm Neoverse-based NVIDIA Grace™ CPUs in a rack-scale design, acting as a single massive GPU built for test-time scaling. With the NVIDIA GB300 NVL72, AI models can access the platform’s increased compute capacity to explore different solutions to problems and break down complex requests into multiple steps, resulting in higher-quality responses.

    GB300 NVL72 is also expected to be available on NVIDIA DGX™ Cloud, an end-to-end, fully managed AI platform on leading clouds that optimizes performance with software, services and AI expertise for evolving workloads. NVIDIA DGX SuperPOD™ with DGX GB300 systems uses the GB300 NVL72 rack design to provide customers with a turnkey AI factory.

    The NVIDIA HGX B300 NVL16 features 11x faster inference on large language models, 7x more compute and 4x larger memory compared with the Hopper generation to deliver breakthrough performance for the most complex workloads, such as AI reasoning.

    In addition, the Blackwell Ultra platform is ideal for applications including:

    • Agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multistep problems. AI agent systems go beyond instruction-following. They can reason, plan and take actions to achieve specific goals.
    • Physical AI, enabling companies to generate synthetic, photorealistic videos in real time for the training of applications such as robots and autonomous vehicles at scale.

    NVIDIA Scale-Out Infrastructure for Optimal Performance
    Advanced scale-out networking is a critical component of AI infrastructure that can deliver top performance while reducing latency and jitter.

    Blackwell Ultra systems seamlessly integrate with the NVIDIA Spectrum-X™ Ethernet and NVIDIA Quantum-X800 InfiniBand platforms, with 800 Gb/s of data throughput available for each GPU in the system, through an NVIDIA ConnectX®-8 SuperNIC. This delivers best-in-class remote direct memory access capabilities to enable AI factories and cloud data centers to handle AI reasoning models without bottlenecks.

    NVIDIA BlueField®-3 DPUs, also featured in Blackwell Ultra systems, enable multi-tenant networking, GPU compute elasticity, accelerated data access and real-time cybersecurity threat detection.

    Global Technology Leaders Embrace Blackwell Ultra
    Blackwell Ultra-based products are expected to be available from partners starting in the second half of 2025.

    Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro are expected to deliver a wide range of servers based on Blackwell Ultra products, in addition to Aivres, ASRock Rack, ASUS, Eviden, Foxconn, GIGABYTE, Inventec, Pegatron, Quanta Cloud Technology (QCT), Wistron and Wiwynn.

    Cloud service providers Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure and GPU cloud providers CoreWeave, Crusoe, Lambda, Nebius, Nscale, Yotta and YTL will be among the first to offer Blackwell Ultra-powered instances.

    NVIDIA Software Innovations Reduce AI Bottlenecks
    The entire NVIDIA Blackwell product portfolio is supported by the full-stack NVIDIA AI platform. The NVIDIA Dynamo open-source inference framework — also announced today — scales up reasoning AI services, delivering leaps in throughput while reducing response times and model serving costs by providing the most efficient solution for scaling test-time compute.

    NVIDIA Dynamo is new AI inference-serving software designed to maximize token revenue generation for AI factories deploying reasoning AI models. It orchestrates and accelerates inference communication across thousands of GPUs, and uses disaggregated serving to separate the processing and generation phases of large language models on different GPUs. This allows each phase to be optimized independently for its specific needs and ensures maximum GPU resource utilization.
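    The core idea behind disaggregated serving can be sketched in a few lines. This is an illustrative toy model, not NVIDIA Dynamo's actual API: the `GPUPool` class and `serve` function are hypothetical names used to show how the compute-bound prefill phase and the memory-bandwidth-bound decode phase of an LLM request can be routed to separately sized GPU pools.

```python
# Illustrative sketch of disaggregated serving (not NVIDIA Dynamo's API):
# the two LLM phases -- prompt processing (prefill) and token generation
# (decode) -- go to separate GPU pools so each can be sized independently.
from dataclasses import dataclass, field

@dataclass
class GPUPool:
    name: str
    gpus: int
    assigned: list = field(default_factory=list)

    def submit(self, request_id: str) -> str:
        # Record the work item and report where it ran.
        self.assigned.append(request_id)
        return f"{request_id}@{self.name}"

def serve(request_id: str, prefill: GPUPool, decode: GPUPool) -> list:
    # Phase 1: compute-bound prompt processing on the prefill pool.
    steps = [prefill.submit(request_id + ":prefill")]
    # Phase 2: memory-bandwidth-bound token generation on the decode pool.
    steps.append(decode.submit(request_id + ":decode"))
    return steps

prefill_pool = GPUPool("prefill", gpus=8)   # sized for compute throughput
decode_pool = GPUPool("decode", gpus=24)    # sized for KV-cache capacity
print(serve("req-1", prefill_pool, decode_pool))
```

    Because the two pools are independent, an operator can add decode GPUs to serve longer generations without over-provisioning prefill compute, which is the resource-utilization benefit the release describes.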

    Blackwell systems are ideal for running new NVIDIA Llama Nemotron Reason models and the NVIDIA AI-Q Blueprint, supported in the NVIDIA AI Enterprise software platform for production-grade AI. NVIDIA AI Enterprise includes NVIDIA NIM microservices, as well as AI frameworks, libraries and tools that enterprises can deploy on NVIDIA-accelerated clouds, data centers and workstations.

    The Blackwell platform builds on NVIDIA’s ecosystem of powerful development tools, NVIDIA CUDA-X libraries, over 6 million developers and 4,000+ applications scaling performance across thousands of GPUs.

    Learn more by watching the NVIDIA GTC keynote and register for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Kristin Uchiyama
    NVIDIA Corporation
    +1-408-313-0448
    kuchiyama@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting or offering NVIDIA’s products and technologies; Blackwell Ultra being able to easily and efficiently do pretraining, post-training and reasoning AI inference; and advanced networking being a critical component of AI infrastructure that can deliver top performance while reducing latency and jitter are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as, a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, BlueField, ConnectX, CUDA-X, NVIDIA DGX, NVIDIA DGX SuperPOD, NVIDIA Grace, NVIDIA HGX, NVIDIA Hopper, NVIDIA NIM and NVIDIA Spectrum-X are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/7bb5b0bf-daad-41dc-8d0f-d1706984d616

    The MIL Network

  • MIL-OSI: NVIDIA Announces Spectrum-X Photonics, Co-Packaged Optics Networking Switches to Scale AI Factories to Millions of GPUs

    Source: GlobeNewswire (MIL-OSI)

    • 1.6 Terabits Per Second Per Port Switches to Deliver 3.5x Energy Savings and 10x Resilience in AI Factories
    • Joint Inventions and Collaborations With TSMC, Coherent, Corning Incorporated, Foxconn, Lumentum and SENKO to Create Integrated Silicon, Optics Process and Supply Chain

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today unveiled NVIDIA Spectrum-X™ and NVIDIA Quantum-X silicon photonics networking switches, which enable AI factories to connect millions of GPUs across sites while drastically reducing energy consumption and operational costs. NVIDIA has achieved the fusion of electronic circuits and optical communications at massive scale.

    As AI factories grow to unprecedented sizes, networks must evolve to keep pace. NVIDIA photonics switches are the world’s most advanced networking solution. They integrate optics innovations with 4x fewer lasers to deliver 3.5x more power efficiency, 63x greater signal integrity, 10x better network resiliency at scale and 1.3x faster deployment compared with traditional methods.

    “AI factories are a new class of data centers with extreme scale, and networking infrastructure must be reinvented to keep pace,” said Jensen Huang, founder and CEO of NVIDIA. “By integrating silicon photonics directly into switches, NVIDIA is shattering the old limitations of hyperscale and enterprise networks and opening the gate to million-GPU AI factories.”

    NVIDIA silicon photonics networking switches are available as part of the NVIDIA Spectrum-X Photonics Ethernet and NVIDIA Quantum-X Photonics InfiniBand platforms.

    The Spectrum-X Ethernet networking platform delivers superior performance and 1.6x bandwidth density compared with traditional Ethernet for multi-tenant, hyperscale AI factories, including the largest supercomputer in the world.

    NVIDIA Spectrum-X Photonics switches come in multiple configurations: 128 ports of 800Gb/s or 512 ports of 200Gb/s for 100Tb/s of total bandwidth, and 512 ports of 800Gb/s or 2,048 ports of 200Gb/s for a total throughput of 400Tb/s.
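    As a quick sanity check on the port arithmetic, the two port mixes quoted for each tier multiply out to the same aggregate; note the quoted 100Tb/s and 400Tb/s appear to be round nominal figures for raw products of 102.4 and 409.6 Tb/s. A few lines of Python (the `total_tbps` helper is ours, for illustration):

```python
def total_tbps(ports: int, gbps_per_port: int) -> float:
    """Aggregate switch throughput: ports x per-port rate, Gb/s -> Tb/s."""
    return ports * gbps_per_port / 1000

configs = {
    "128 x 800G":  total_tbps(128, 800),    # 102.4 Tb/s (~100Tb/s quoted)
    "512 x 200G":  total_tbps(512, 200),    # 102.4 Tb/s (~100Tb/s quoted)
    "512 x 800G":  total_tbps(512, 800),    # 409.6 Tb/s (~400Tb/s quoted)
    "2048 x 200G": total_tbps(2048, 200),   # 409.6 Tb/s (~400Tb/s quoted)
}
print(configs)
```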

    NVIDIA Quantum-X Photonics switches provide 144 ports of 800Gb/s InfiniBand based on 200Gb/s SerDes and use a liquid-cooled design to efficiently cool the onboard silicon photonics. NVIDIA Quantum-X Photonics switches offer 2x faster speeds and 5x higher scalability for AI compute fabrics compared with the previous generation.

    A Networked Ecosystem
    NVIDIA’s silicon photonics ecosystem includes TSMC, Browave, Coherent, Corning Incorporated, Fabrinet, Foxconn, Lumentum, SENKO, SPIL, Sumitomo Electric Industries and TFC Communication.

    “A new wave of AI factories requires efficiency and minimal maintenance to achieve the scale required for next-generation workloads,” said C. C. Wei, chairman and CEO of TSMC. “TSMC’s silicon photonics solution combines our strengths in both cutting-edge chip manufacturing and TSMC-SoIC 3D chip stacking to help NVIDIA unlock an AI factory’s ability to scale to a million GPUs and beyond, pushing the boundaries of AI.”

    NVIDIA photonics will drive massive growth for a new wave of state-of-the-art AI factories, alongside pluggable optical transceiver technologies supported by industry leaders including Coherent, Eoptolink, Fabrinet and Innolight.

    Availability
    NVIDIA Quantum-X Photonics InfiniBand switches are expected to be available later this year, with NVIDIA Spectrum-X Photonics Ethernet switches coming in 2026 from leading infrastructure and system vendors.

    Learn more by watching the NVIDIA GTC keynote and register for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Alex Shapiro
    Enterprise Networking
    1-415-608-5044
    ashapiro@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; third parties adopting or offering NVIDIA’s products and technologies; and by integrating silicon photonics directly into switches, NVIDIA is shattering the old limitations of hyperscale and enterprise networks and opening the gate to million-GPU AI factories are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo and NVIDIA Spectrum-X are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/fff1c2c4-5853-4e6f-a262-e41e95d2301a


  • MIL-OSI: Last Year’s Average Tax Refund Was Over $3,000: Here’s Why You Should File Your Return Now

    Source: GlobeNewswire (MIL-OSI)

    MOUNTAIN VIEW, Calif., March 18, 2025 (GLOBE NEWSWIRE) — With the tax return deadline just weeks away on April 15, experts say it’s essential to file as soon as possible. People may be delaying their filing due to confusion around IRS layoffs and uncertainty about whether and when new tax proposals will pass. There is no reason to wait, since these proposals will not affect the 2024 taxes you are filing now. The IRS will maintain essential operations throughout tax season — so why wait to get your refund?

    A Media Snippet accompanying this announcement is available.

    Last tax season, the average tax refund was over $3,000. In fact, the IRS’s latest filing statistics, for the week ending March 7, 2025, report that refunds are up 5.7% over last year, with an average refund of $3,324. With rising costs and current economic concerns, there’s no reason to delay. File your taxes now to get closer to your refund.

    Whether you want to file your taxes yourself or have a tax expert do them for you, Intuit TurboTax provides a fast and stress-free filing solution. Thousands of TurboTax Live experts are available now to do your taxes from start to finish, virtually or in person, and take them off your plate.

    Plus, if you used TurboTax Full Service last season, you can work with the same expert again this year.

    The tax deadline is rapidly approaching, so no matter how you file, don’t wait and get your taxes done now!

    Learn more at: turbotax.com

    MEDIA CONTACT:
    Lisa Greene-Lewis, lisa_greene-lewis@intuit.com


  • MIL-OSI: HP Turbocharges Partner Growth to Drive the Future of Work

    Source: GlobeNewswire (MIL-OSI)

    News Highlights

    • HP unlocks growth with new compensation structure for commercial, retail and distribution partners
    • Accelerates AI adoption with expansion of HP Amplify AI program
    • Exceeds ambitious Amplify Impact targets and doubles sustainable RFP wins

    NASHVILLE, Tenn., March 18, 2025 (GLOBE NEWSWIRE) — Today at the Amplify Conference, HP Inc. (NYSE: HPQ) announced new benefits through its Amplify™ partner program to help partners navigate the evolving demands of the future of work with smarter, more connected experiences. Enhancements include the launch of the Amplify SuperPower Booster, an upgraded compensation structure that rewards portfolio-wide HP sales and supports flexible technology solutions. HP is also expanding the Amplify AI program with new resources and use cases to help partners accelerate adoption. Additionally, the HP Amplify Impact sustainability program surpassed its 2025 enrollment targets, with participating partners seeing an increase in request for proposal (RFP) win rates.

    “In today’s fast-changing technology landscape, HP’s commitment to empowering our partners for success in the future of work is more important than ever,” said Kobi Elbaz, Senior Vice President and General Manager of Global Revenue Operations at HP. “AI-powered solutions are transforming productivity, enabling more fulfilling work experiences, helping customers solve challenges with greater efficiency, creativity, and impact.”

    HP Partner Program Evolves for Long-Term Growth
    HP is dedicated to shaping the future of work by enhancing partner experiences and fostering positive customer outcomes, powered by the strength of HP’s AI-enabled portfolio of products and solutions. To create new opportunities for HP Amplify partners to grow and stay ahead of evolving market needs, HP has introduced the Amplify SuperPower Booster, an enhancement to the compensation structure of the HP Amplify partner program. This initiative rewards commercial, distribution and retail partners alike for selling across the HP portfolio.

    In 2023, HP introduced the More for More benefit, a rate multiplier that boosts sales and compensation for qualified partners. Building on More for More, HP is expanding the initiative to include the entire portfolio of HP products and solutions under the new structure. This new initiative will launch on May 1 for commercial partners, with a rollout for retail and distribution partners later this year.

    For partners with specialized businesses, HP will continue to reward the unique value and capabilities their expertise brings to the market.

    Expansion of HP Amplify AI Program Drives AI Adoption and Upskilling
    Today HP announced the expansion of its HP Amplify AI program, including new customer use cases, instant access to AI experts, and personalized AI pathways and training modules such as the HP NVIDIA Technical Sales Strategy AI Workstation MasterClass for advanced AI knowledge.

    In addition, HP introduced a new tailored and condensed training path for partner executives, covering various AI-focused topics and featuring short video use cases that highlight the tangible business benefits of AI. These concise and practical resources enable executives to make informed decisions and facilitate discussions with customers that drive positive outcomes.

    Through collaboration with Alliance Partners and the comprehensive education opportunities offered, HP reaffirms its commitment to lead in AI innovation and partner enablement, delivering effective solutions to customers worldwide. As the demand for AI continues to rise, HP remains at the forefront, empowering businesses to unlock the full potential of AI technologies.

    Enhancing Productivity and Partner Experience
    HP is continuously expanding its suite of AI-powered tools, including chatbots, to create positive experiences for partners. To further improve efficiency and streamline business processes, HP has outlined a two-year roadmap aimed at transforming the HP Partner Portal into a more comprehensive digital platform that leverages AI technologies. As part of this initiative, HP plans to launch a Partner AI Assistant to facilitate faster digital interactions, simplify onboarding, personalize user experiences, and provide real-time support, among other benefits.

    HP Amplify Impact Surpasses Participation Goals and Doubles Sustainable Sales
    Since 1939, HP has been committed to driving meaningful Sustainable Impact. In 2021, the company launched HP Amplify Impact, the IT industry’s first sustainability program for channel partners, which has now exceeded its goal of enrolling 50 percent of Amplify partners by 2025.

    As sustainability becomes a key factor in evolving customer requirements, HP is equipping partners to meet legislative and customer demands. The HP Amplify Impact program offers best-in-class assessments, resources, and training to support partners on their sustainability journey. Participating partners have seen a 70 percent increase in win rate, leading to a twofold increase in sustainable sales year over year.

    With partner capabilities expanding, HP has shifted the program’s focus from helping partners develop sustainability plans to addressing customer needs and enhancing business growth with a positive environmental impact. This strategic shift aims to further empower partners to thrive in a competitive market while maintaining a commitment to sustainability.

    More from Amplify Partner Conference
    To follow all the latest news and announcements from the 2025 Amplify Conference, visit the HP Newsroom or follow us on social:

    • Follow @HP on LinkedIn, X and Instagram
    • Follow @Enrique Lores on LinkedIn
    • Follow #HPAmplify across social platforms for the latest updates

    About HP Amplify
    HP Amplify is an industry-leading global1 partner program optimized for dynamic partner growth and for delivering consistent end-customer experiences and positive outcomes. It delivers a simplified, easy-to-navigate global structure that rewards partners based on three pillars: performance, collaboration, and capabilities. Since the launch of HP Amplify, HP has expanded the program with Amplify Data Insights, Amplify Retail, Amplify Online, Amplify Impact and HP Amplify AI.

    About HP
    HP Inc. is a global technology leader and creator of solutions that enable people to bring their ideas to life and connect to the things that matter most. Operating in more than 170 countries, HP delivers a wide range of innovative and sustainable devices, services and subscriptions for personal computing, printing, 3D printing, hybrid work, gaming, and more. For more information, please visit http://www.hp.com.

    Resources:

    1 All geographic markets apart from Greater China


  • MIL-OSI: Announcement of the granting of Power of Attorney to the Board of Directors

    Source: GlobeNewswire (MIL-OSI)

    Announcement of the granting of Power of Attorney to the Board of Directors

    Pursuant to announcement no. 1172 of 31 October 2017 regarding Major Shareholders, The BANK of Greenland hereby announces that the Board of Directors has received unqualified Powers of Attorney for use at The BANK of Greenland’s ordinary general meeting on 26 March 2025, representing 30.27 percent of the company’s share capital, or 544,852 shares.

    Upon termination of the ordinary general meeting on 26 March 2025, the Board of Directors’ right to vote in accordance with the Powers of Attorney granted shall cease.

    Best regards
    The BANK of Greenland

    Martin Kviesgaard
    Managing Director

    Contact: +299 34 78 02 / mail: mbk@banken.gl

    Attachment


  • MIL-OSI: NVIDIA and Telecom Industry Leaders to Develop AI-Native Wireless Networks for 6G

    Source: GlobeNewswire (MIL-OSI)

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — NVIDIA today unveiled partnerships with industry leaders T-Mobile, MITRE, Cisco, ODC, a portfolio company of Cerberus Capital Management, and Booz Allen Hamilton on the research and development of AI-native wireless network hardware, software and architecture for 6G.

    Next-generation wireless networks must be fundamentally integrated with AI to seamlessly connect hundreds of billions of phones, sensors, cameras, robots and autonomous vehicles. AI-native wireless networks will provide enhanced services for billions of users and set new standards in spectral efficiency — the rate at which data can be transmitted over a given bandwidth. They will also offer groundbreaking performance and resource utilization while creating new revenue streams for telecommunications companies.
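    The release's definition of spectral efficiency has a standard information-theoretic ceiling given by the Shannon capacity formula, C = B · log2(1 + SNR). The sketch below is illustrative only; the bandwidth and SNR figures are our own example numbers, not values from the announcement.

```python
import math

def spectral_efficiency(snr_db: float) -> float:
    """Shannon limit in bits/s/Hz for a given signal-to-noise ratio."""
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a linear ratio
    return math.log2(1 + snr_linear)

def capacity_gbps(bandwidth_hz: float, snr_db: float) -> float:
    """Channel capacity C = B * log2(1 + SNR), reported in Gb/s."""
    return bandwidth_hz * spectral_efficiency(snr_db) / 1e9

# Example: a 100 MHz channel at 20 dB SNR.
print(round(spectral_efficiency(20), 2), "bits/s/Hz")
print(round(capacity_gbps(100e6, 20), 3), "Gb/s")
```

    Raising spectral efficiency, whether through better radio algorithms or the AI-driven signal processing described above, means carrying more bits per second through the same licensed spectrum, which is why the metric matters commercially.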

    “Next-generation wireless networks will be revolutionary, and we have an unprecedented opportunity to ensure AI is woven in from the start,” said Jensen Huang, founder and CEO of NVIDIA. “Working with leaders in the field, we’re building an AI-enhanced 6G network that achieves extreme spectral efficiency.”

    Open Ecosystems Drive Innovation
    Research-driven breakthroughs harnessing the power of AI are necessary to maximize the performance and benefits of AI-native wireless networks. To drive innovation, NVIDIA is collaborating with telco and research leaders to develop an AI-native wireless network stack based on the NVIDIA AI Aerial platform, which provides software-defined radio access networks (RANs) on the NVIDIA accelerated computing platform.

    Developers across the globe are building AI-RAN as a precursor to AI-native 6G wireless networks. AI-RAN is a technology that brings AI and RAN workloads together on one platform and embeds AI into radio signal processing.

    To deliver enhanced spectral efficiency and lower operational complexity and costs, AI will be fully embedded into the network stack’s software and hosted over a unified accelerated infrastructure, capable of running both network and AI workloads. Also at the solution’s core will be end-to-end security and an open architecture to foster rapid innovation.

    T-Mobile and NVIDIA will expand their AI-RAN Innovation Center collaboration announced last year with the goal of providing additional research-based concepts for AI-native 6G network capabilities, working alongside these new industry collaborators.

    “This is an exciting next step in the AI-RAN Innovation Center efforts we began last September at our Capital Markets Day in partnership with NVIDIA,” said Mike Sievert, CEO of T-Mobile. “Working with these additional industry leaders on research to natively integrate AI into the network as we begin the journey to 6G will enable the network performance, efficiency and scale to power the next generation of experiences that customers and businesses expect.”

    As the founding research partner, MITRE, a not-for-profit research and development organization, will research, prototype and contribute open, AI-driven services and applications, such as for agentic network orchestration and security, dynamic spectrum sharing and 6G-integrated sensing and communications.

    “MITRE is working with NVIDIA to help make AI-native 6G a reality,” said Mark Peters, president and CEO of MITRE. “By integrating AI into 6G in the beginning, we can solve a wide range of problems, from enhancing service delivery to unlocking required spectrum availability to fuel wireless growth. Through all of our collaborations with NVIDIA, we look forward to creating impact in 6G, AI, simulation, transportation and more.”

    Cisco plans to take a lead position in this collaboration as the provider of mobile core and network technologies and will tap into its existing service provider reach and expertise.

    “With 6G on the horizon, it’s critical for the industry to work together to build AI-native networks for the future,” said Chuck Robbins, chair and CEO of Cisco. “Cisco is at the forefront of developing secure infrastructure technology for AI, and we are proud to work with NVIDIA and the broader ecosystem to create an AI-enhanced network that improves performance, reliability and security for our customers.”

    ODC, a portfolio company of Cerberus Capital Management, L.P., will deliver cutting-edge layer 2 and layer 3 software for distributed and centralized units of virtual RAN as part of the AI-native radio access stack. Tapping into decades of experience in large-scale mobile systems, ODC is pioneering next-generation AI-native 5G open RAN (ORAN), surpassing existing networks and seamlessly paving the way for 6G evolution.

    “The mobile industry has always taken advantage of advances in other technology fields, and today, no technology is more central than AI,” said Shaygan Kheradpir, chairman of the advisory board of ODC. “ODC is at the forefront of developing and deploying AI-native ORAN 2.0 networks, enabling service providers to on-ramp seamlessly from 5G to 6G by taking advantage of the vast AI ecosystem to redefine the future of connectivity.”

    As a leader in AI and cybersecurity to the federal government, Booz Allen will develop AI RAN algorithms and secure the AI-native 6G wireless platform. Its NextG lab will conduct functional, performance integration and security testing to ensure the resiliency and security of the platform against the most sophisticated adversaries. The company will lead field trials for advanced use cases such as autonomy and robotics.

    “The future of wireless communications starts today, and it’s all about AI,” said Horacio Rozanski, chairman and CEO of Booz Allen. “Booz Allen has the technologies to make AI-native 6G networks a reality and revolutionize secure communications for an entirely new generation of intelligent platforms and applications.”

    Expanded Aerial Research Portfolio
    These collaborations build on NVIDIA’s AI-RAN and 6G research ecosystem, supported by advancements in the NVIDIA Aerial™ research portfolio for developing, training, simulating and deploying groundbreaking AI-native wireless innovations.

    New additions to the NVIDIA Aerial Research portfolio, also announced today, include the Aerial Omniverse Digital Twin Service, the Aerial Commercial Test Bed on NVIDIA MGX™, NVIDIA Sionna™ 1.0 — building on the open-source Sionna library, which has nearly 150,000 downloads since its launch in 2022 — and the Sionna Research Kit on the NVIDIA Jetson™ accelerated computing platform. 

    The NVIDIA Aerial Research portfolio serves over 2,000 members through the NVIDIA 6G Developer Program. Industry leaders and more than 150 higher-education and research institutions from the U.S. and around the world are harnessing the platform to accelerate 6G and AI-RAN innovation — paving the way for AI-native wireless networks.

    Learn more by watching the NVIDIA GTC telecom special address and register for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

    About NVIDIA
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact:
    Janette Ciborowski
    Enterprise Communications
    NVIDIA Corporation
    +1-734-330-8817
    jciborowski@nvidia.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; the collaboration and partnership between NVIDIA and third parties and the benefits and impact thereof; third parties adopting NVIDIA’s products and technologies and the benefits and impact thereof; next-generation wireless networks being revolutionary; and working with leaders in the field, NVIDIA building an AI-enhanced 6G network that achieves extreme spectral efficiency are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA Aerial, NVIDIA Jetson, NVIDIA MGX and Sionna are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/a411c38c-c7b2-4233-a42d-57969c7812b2

    The MIL Network

  • MIL-OSI: General Motors and NVIDIA Collaborate on AI for Next-Generation Vehicle Experience and Manufacturing

    Source: GlobeNewswire (MIL-OSI)

    SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) — GTC — General Motors and NVIDIA today announced they are collaborating on next-generation vehicles, factories and robots using AI, simulation and accelerated computing. 

    The companies will work together to build custom AI systems using NVIDIA accelerated compute platforms, including NVIDIA Omniverse with NVIDIA Cosmos™, to train AI manufacturing models for optimizing GM’s factory planning and robotics. GM will also use NVIDIA DRIVE AGX™ for in-vehicle hardware for future advanced driver-assistance systems and in-cabin enhanced safety driving experiences.

    “GM has enjoyed a longstanding partnership with NVIDIA, leveraging its GPUs across our operations,” said Mary Barra, chair and CEO of General Motors. “AI not only optimizes manufacturing processes and accelerates virtual testing but also helps us build smarter vehicles while empowering our workforce to focus on craftsmanship. By merging technology with human ingenuity, we unlock new levels of innovation in vehicle manufacturing and beyond.”

    “The era of physical AI is here, and together with GM, we’re transforming transportation, from vehicles to the factories where they’re made,” said Jensen Huang, founder and CEO of NVIDIA. “We are thrilled to partner with GM to build AI systems tailored to their vision, craft and know-how.”

    GM has been investing in NVIDIA GPU platforms for training AI models across various areas, including simulation and validation. The companies’ collaboration now expands to transforming automotive plant design and operations.

    GM will use the NVIDIA Omniverse platform to create digital twins of assembly lines, allowing for virtual testing and production simulations to reduce downtime. The effort will include training robotics platforms already in use for operations such as material handling and transport, along with precision welding, to increase manufacturing safety and efficiency.

    GM will also build next-generation vehicles on NVIDIA DRIVE AGX, based on the NVIDIA Blackwell architecture and running the safety-certified NVIDIA DriveOS™ operating system. Delivering up to 1,000 trillion operations per second of high-performance compute, this in-vehicle computer can speed the development and deployment of safe AVs at scale.

    During the NVIDIA GTC global AI conference, which runs through March 21, NVIDIA will host a fireside chat with GM to discuss the companies’ extended collaboration and delve into how AI is transforming automotive manufacturing and vehicle software development. Register for the session, which will also be available on demand.

    About GM 
    General Motors (NYSE: GM) is driving the future of transportation, leveraging advanced technology to build safer, smarter, and lower emission cars, trucks, and SUVs. GM’s Buick, Cadillac, Chevrolet, and GMC brands offer a broad portfolio of innovative gasoline-powered vehicles and the industry’s widest range of EVs, as we move to an all-electric future. Learn more at GM.com.

    About NVIDIA 
    NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

    For further information, contact: 
    Jessica Soares 
    Automotive, NVIDIA 
    jphernandess@nvidia.com 

    Malorie Lucich        
    Technology Communications, GM
    malorie.lucich@gm.com

    Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, availability, and performance of NVIDIA’s products, services, and technologies; and the collaboration between NVIDIA and General Motors and the benefits and impact thereof are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing products and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

    © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA DriveOS, NVIDIA DRIVE AGX, NVIDIA Omniverse, and NVIDIA Cosmos are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/43231963-dc05-48e7-bde1-aab47041f172

    The MIL Network

  • MIL-OSI: Cequence Security Achieves AWS WAF Ready Designation

    Source: GlobeNewswire (MIL-OSI)

    SANTA CLARA, Calif., March 18, 2025 (GLOBE NEWSWIRE) — Cequence Security, a pioneer in API security and bot management, today announced that it is now an Amazon Web Services (AWS) Web Application Firewall (WAF) Ready Partner. This designation recognizes that Cequence’s solution has been validated by AWS Partner Network (APN) Solutions Architects and integrates seamlessly with AWS WAF. AWS WAF, available across all AWS Regions, can be deployed directly from the AWS console, empowering organizations to strengthen their security posture with minimal effort.

    Being an AWS WAF Ready Partner differentiates Cequence as an APN member whose product integrates with AWS WAF, is generally available, and fully supports AWS customers.

    AWS WAF Ready Partners help customers quickly identify easy-to-deploy solutions that can help detect, mitigate, and analyze some of the most common Internet threats and vulnerabilities.

    Securing web applications has never been more challenging. Fifty-five percent of organizations say protecting their web applications has become more difficult over the past two years, while 93% have faced at least one attack on their web applications and APIs in the past 12 months. This threat landscape is only growing as attackers increasingly harness generative artificial intelligence (AI) to automate and refine their methods.

    Today’s organizations face a range of security challenges:

    • Traditional application attacks: Exploits targeting known vulnerabilities, including OWASP Top 10 risks, along with malware and denial-of-service attacks that disrupt application availability.
    • Unmanaged APIs: Cloud-native architectures and interconnected applications have made APIs prime targets for injection attacks, misconfigurations, and other exploits—often bypassing traditional defenses like WAFs entirely, leaving these APIs even more exposed.
    • Bot and fraud attacks: AI-driven bots are being used at scale for scraping, inventory hoarding, and account fraud, making detection and mitigation increasingly difficult.
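    For readers unfamiliar with how AWS WAF itself is configured, a web ACL with AWS-managed rule groups is the usual starting point for the "traditional application attack" category above. The sketch below is illustrative only (it is not part of Cequence's product and is not executed against AWS here); the ACL name and metric names are hypothetical, and in practice the payload would be passed to boto3's WAFv2 client via create_web_acl().

```python
# Minimal sketch of an AWS WAFv2 web ACL request that enables the
# AWS-managed Common Rule Set, which covers many OWASP Top 10-style
# exploits. Names are illustrative.
web_acl_request = {
    "Name": "example-web-acl",       # hypothetical ACL name
    "Scope": "REGIONAL",             # or "CLOUDFRONT" for edge deployments
    "DefaultAction": {"Allow": {}},  # allow traffic unless a rule blocks it
    "Rules": [
        {
            "Name": "aws-common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},  # keep the rule group's own actions
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "aws-common-rules",
            },
        }
    ],
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "example-web-acl",
    },
}

# A real deployment would then call (requires AWS credentials):
# boto3.client("wafv2").create_web_acl(**web_acl_request)
```

A managed-rules baseline like this addresses known exploit patterns; the unmanaged-API and bot categories in the list above are the gaps that API-security platforms such as Cequence's aim to cover on top of it.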

    Seamlessly securing APIs and applications
    Cequence’s Unified API Protection (UAP) platform enhances existing WAFs and API gateways by providing proactive security tailored to modern API architectures. Unlike traditional security tools, UAP offers real-time visibility into both managed and unmanaged APIs, detecting vulnerabilities, misconfigurations, and anomalous behavior to prevent threats before they escalate.

    By unifying API discovery, compliance enforcement, and threat protection, UAP helps organizations adopt a proactive security posture, safeguarding critical applications, preventing fraud, ensuring compliance, and seamlessly integrating with existing infrastructure.

    “Achieving the AWS WAF Ready designation strengthens our ability to ensure that AWS customers continue to receive advanced API security solutions,” said Ameya Talwalkar, CEO of Cequence. “While WAFs play a role in security, they are not sufficient to combat today’s sophisticated threats. Malicious bots and API-based attacks can bypass traditional defenses. Cequence provides AWS customers with comprehensive protection, addressing the critical security gaps that WAFs may miss.”


    About Cequence Security 
    Cequence is a pioneer in API security and bot management, protecting the applications and APIs that organizations depend on from attacks, business logic abuse, and fraud. Our unique Unified API Protection platform unites discovery, compliance, and protection capabilities, providing unmatched real-time security in the face of sophisticated threats. Demonstrating value in minutes rather than days or weeks, Cequence offers a flexible deployment model that requires no app instrumentation or modification. Cequence solutions scale to meet the needs of the largest and most demanding private and public sector organizations, protecting more than 8 billion daily API interactions and 3 billion user accounts. To learn more, visit www.cequence.ai.

    The MIL Network

  • MIL-OSI: Soroka & Associates Representing Victim in Huber Heights Daycare Abuse Case

    Source: GlobeNewswire (MIL-OSI)

    Columbus, OH, March 18, 2025 (GLOBE NEWSWIRE) — Reports of abuse at the Early Beginnings Child Care and Learning Center in Huber Heights, Ohio, are now leading to lawsuits. Soroka & Associates is representing one of the victims of this disturbing pattern of abuse at the child care facility and has filed a lawsuit on behalf of the family. At least four families allege that their children were injured by staff at Early Beginnings Child Care and Learning Center. It appears the facility took no steps to report the abuse and failed to take any remedial action upon learning of said abuse. 

    Roger Soroka spoke about the case and the firm’s commitment to advocating for their client. 

    “As counsel for one of the families impacted by this day care’s negligence, our firm is committed to holding the responsible parties accountable for the harm caused by the day care employee and the day care facility’s failure to act. The severity of the injuries suffered by our infant client is deeply distressing, and the emotional toll on the family is immeasurable. Learning that a child has been hurt while in the care of a trusted institution is a devastating experience that no family should ever have to endure.

    The betrayal felt by our clients is profound. Day care facilities are entrusted with the most vulnerable members of our society, and it is their duty to provide a safe and nurturing environment. When this trust is broken, the consequences are not only physical but also emotional and psychological.

    The safety and well-being of children must always be the top priority, and we will continue to fight for a system where families can trust that their children are protected.”

    Background on allegations against Early Beginnings

    Reports show that one of the victims was just three months old when he suffered a severe brain injury that required emergency surgery after an employee allegedly shook the child. A witness claims to have seen the abuse. Even more shocking, the victim was one of at least four children who suffered similar injuries, including Soroka & Associates’ client. The existence of multiple victims and an ongoing criminal child abuse investigation at Early Beginnings indicate that leaders at the facility knew about the abuse and neglected to act to prevent other children from being injured. As such, the firm has moved forward with a lawsuit.

    Soroka & Associates is a Columbus-based trial firm that fights for victims of negligence, abuse, and serious injuries. The firm has a strong record of success in personal injury and civil litigation and takes on complex cases, including those involving child abuse, wrongful death, and negligent security. Soroka & Associates is committed to holding wrongdoers accountable and securing justice for those they represent.

    Soroka & Associates is now fighting on behalf of one of the young victims injured at Early Beginnings Child Care and Learning Center and their family. If you have any information about allegations of child abuse at this facility or are interested in learning more about Soroka & Associates, visit sorokalegal.com or follow the firm on Facebook, Twitter, YouTube, and Instagram.

    The MIL Network

  • MIL-OSI: Welnax BioClear Reviews: Does This Toenail Fungus Device Really Work or Another Hype?

    Source: GlobeNewswire (MIL-OSI)

    PLAINVIEW, N.Y., March 18, 2025 (GLOBE NEWSWIRE) — Toenail fungus—an issue often dismissed as minor—affects millions worldwide, causing cosmetic embarrassment, pain, and discomfort. While many sufferers turn to creams, ointments, and even prescription medications, results are often slow and inconsistent. However, an innovative device is making waves in nail care: the Welnax BioClear Toenail Fungus Device. With over 8,200 positive reviews, this at-home laser therapy tool claims to eliminate toenail fungus painlessly and effectively.

    For a limited time only, Welnax BioClear is currently being offered at a special discount price for customers here.

    The Hidden Battle Beneath Your Nails

    For those who have never experienced toenail fungus, it might seem like a trivial concern. However, for sufferers, it can be a persistent nightmare. Fungal infections of the toenails, medically known as onychomycosis, often start as a small white or yellow spot under the nail but can quickly spread, leading to thickened, brittle, and discolored nails. In severe cases, the infection can cause pain and an unpleasant odor, making everyday activities like walking or wearing open-toed shoes an ordeal.

    Traditional treatments often come with drawbacks. Topical antifungal creams require long-term application and may not penetrate deep enough to eradicate the fungus. Oral medications can be more effective but carry potential side effects, including liver damage. Laser treatments performed at clinics have shown promise, but they are costly and require multiple sessions. So, could Welnax BioClear truly be the game-changer the world has been waiting for?

    What is Welnax BioClear?

    The Welnax BioClear Toenail Fungus Device is a cutting-edge, non-invasive laser therapy tool designed for at-home use. Unlike traditional treatments that rely on chemicals or pharmaceuticals, Welnax BioClear employs advanced low-level laser therapy (LLLT) to target fungal infections at their root, promoting healthy nail regrowth.

    The device is compact, user-friendly, and portable, making it easy for users to integrate treatment into their daily routine. Designed for all ages, including children (with adult supervision), it requires just 7 minutes per session and claims to deliver visible results in as little as 1 to 2 months.

    (Big Discount) Click Here to Get Welnax BioClear For Up To 70% Off The Original Price

    How Does It Work? The Science Behind the Claims

    Welnax BioClear utilizes low-level laser therapy (LLLT), a technology that has been widely studied for its effectiveness in various medical treatments. The device emits specific wavelengths of light that penetrate the nail bed, directly targeting fungal cells.

    The laser disrupts the fungal cell structure, preventing its growth and reproduction. At the same time, the light energy stimulates blood circulation and cellular regeneration, promoting the growth of healthier, stronger nails. Unlike other treatment methods, Welnax BioClear does not rely on chemicals, ensuring a safe, side-effect-free experience.

    The real curiosity factor here is: How can something so small and non-invasive be powerful enough to eliminate a stubborn fungal infection? Skeptics might raise eyebrows, but thousands of satisfied users suggest the device may be more effective than it seems at first glance.

    User Experience: What Do Customers Say?

    With over 8,200 positive reviews, as shown on its official website, Welnax BioClear has built a loyal customer base. Many users report significant improvements in nail color, thickness, and overall health within weeks. Some even claim their nails have been completely restored within two months of consistent use.

    Here are a few customer testimonials:

    • “I was skeptical at first, but after using Welnax BioClear for about six weeks, my toenail looks almost normal again. No more embarrassment when wearing sandals!” – Jason L., California
    • “I tried so many antifungal creams that did nothing. This little device actually works! Painless, easy to use, and worth every penny.” – Maria T., Florida
    • “My podiatrist recommended expensive laser treatments, but I gave this a shot first. I’m amazed at the results!” – Andrew P., Texas

    However, not every review is glowing. Some users reported slower progress, while others emphasized the need for consistency. Like most treatments, Welnax BioClear is not a magic bullet—it requires patience and regular use for optimal results. Individual results may vary.

    Click Here to Read More Customer Reviews on Welnax BioClear Device Before Purchasing

    Pros and Cons: Weighing the Evidence

    Pros:

    • Non-Invasive & Painless: Unlike surgical treatments or medications with side effects, Welnax BioClear offers a gentle solution.
    • At-Home Convenience: No need for costly clinic visits.
    • Clinically Approved Technology: Low-level laser therapy has been studied and used for medical applications.
    • Fast Treatment Time: Only 7 minutes per session, twice a day.
    • Positive Customer Feedback: Thousands of satisfied users report visible results.
    • No Harsh Chemicals or Drugs: Safe for all ages.

    Cons:

    • Results May Vary: The effectiveness depends on the severity of the fungal infection and consistent use.
    • Requires Patience: While some users see results within weeks, others may need months.
    • Initial Investment: The upfront cost ($99.90 per device) may seem high, though it is more affordable than professional laser treatments.
    • Availability Issues: The device is primarily available online, which may limit accessibility for some users.

    Pricing and Guarantee: Worth the Investment?

    Compared to professional laser treatments, which can cost anywhere from $500 to $1,500, Welnax BioClear is a more budget-friendly option. Pricing options include:

    • 1 Device: $99.90 (Original: $199.90)
    • 2 Devices: $149.90 (Save 62%)
    • 3 Devices: $179.90 (Save 70%)
    • 4 Devices: $199.90 (Save 75%)

    Additionally, Welnax offers a 30-day money-back guarantee, allowing customers to try the product risk-free.

    Is Welnax BioClear the Future of Nail Fungus Treatment?

    Traditional treatments often fall short, so Welnax BioClear presents an intriguing alternative. Its cutting-edge laser therapy, ease of use, and positive customer feedback suggest it could be a breakthrough in nail care. But, as with any treatment, results depend on consistency and individual circumstances.

    The biggest curiosity remains: Could this compact device really be the end of stubborn toenail fungus? Or is it just another fleeting trend? While the testimonials and scientific backing are promising, only time—and more widespread usage—will confirm its ultimate impact.

    One thing is certain: for those struggling with toenail fungus, Welnax BioClear offers a pain-free, convenient, and innovative solution worth exploring. Whether it turns out to be the ultimate solution or just another tool in the fight against fungal infections, it has certainly captured the attention of those seeking healthier, more beautiful nails.

    For more information or to read Welnax BioClear customer testimonials, visit the official website here.

    Media Contact:
    Peter Siddle
    info@hgicounseling.org
    1-888-423-1121

    Disclaimers:

    This article is not intended to provide medical advice or to take the place of medical advice and treatment from your personal physician. Visitors are advised to consult their own doctors or other qualified health professionals regarding the treatment of medical conditions. The publisher shall not be held liable or responsible for any misunderstanding or misuse of the information contained in this release or for any loss, damage, or injury caused, or alleged to be caused, directly or indirectly by any treatment, action, or application of any food or food source discussed in this article. The U.S. Food and Drug Administration has not evaluated the statements on this website. The information is not intended to diagnose, treat, cure, or prevent any disease.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/83ce35f0-90be-4672-9ebc-15c5edd5336c

    The MIL Network

  • MIL-OSI: Annual general meeting of Spar Nord Bank A/S

    Source: GlobeNewswire (MIL-OSI)

    Company announcement no. 07


    Annual general meeting of Spar Nord Bank A/S

    Results of the annual general meeting held on 18 March 2025:

    • The report by the Board of Directors, the audited financial statements and the proposal for allocation of profits were approved.
    • The remuneration report for 2024 and the level of the Board’s remuneration in 2025 were approved.
    • The authorisation to the Company to buy treasury shares was approved.
    • Deloitte Statsautoriseret Revisionspartnerselskab was appointed as external auditors to audit the Company’s financial statements and to prepare a report on the Company’s sustainability reporting.
    • The proposals from the Board of Directors to amend the Articles of Association were approved.

    Election of members to the Board of Directors
    Kjeld Johannesen (Nibe), Per Nikolaj Bukh (Risskov), Morten Bach Gaardboe (Slagelse), Henrik Sjøgreen (Gentofte), Lisa Lund Holst (Virum), Michael Lundgaard Thomsen (Aalborg) and Mette Louise Kaagaard (Birkerød) were re-elected as board members.

    In addition, the Board of Directors consists of members elected by the employees: Jannie Skovsen, chairman of Spar Nord Kreds, Gitte Holmgaard, deputy chairman of Spar Nord Kreds, and Rikke Marie Christiansen, HR Partner.

    At the subsequent board meeting, the Board of Directors elected Kjeld Johannesen as chairman and Per Nikolaj Bukh as deputy chairman.

    Spar Nord
    Martin Bach
    SVP Corporate Communication

    Attachment

    The MIL Network