Category: KB

  • MIL-OSI: AMD Delivers Leadership AI Performance with AMD Instinct MI325X Accelerators

    Source: GlobeNewswire (MIL-OSI)

    ─ Latest accelerators offer market leading HBM3E memory capacity and are supported by partners and customers including Dell Technologies, HPE, Lenovo, Supermicro and others ─

    ─ AMD Pensando Salina DPU offers 2X generational performance and AMD Pensando Pollara 400 is the industry’s first UEC-ready NIC ─

    SAN FRANCISCO, Oct. 10, 2024 (GLOBE NEWSWIRE) — Today, AMD (NASDAQ: AMD) announced the latest accelerator and networking solutions that will power the next generation of AI infrastructure at scale: AMD Instinct™ MI325X accelerators, the AMD Pensando™ Pollara 400 NIC and the AMD Pensando Salina DPU. AMD Instinct MI325X accelerators set a new standard in performance for Gen AI models and data centers.

    Built on the AMD CDNA™ 3 architecture, AMD Instinct MI325X accelerators are designed for exceptional performance and efficiency for demanding AI tasks spanning foundation model training, fine-tuning and inferencing. Together, these products enable AMD customers and partners to create highly performant and optimized AI solutions at the system, rack and data center level.

    “AMD continues to deliver on our roadmap, offering customers the performance they need and the choice they want, to bring AI infrastructure, at scale, to market faster,” said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. “With the new AMD Instinct accelerators, EPYC processors and AMD Pensando networking engines, the continued growth of our open software ecosystem, and the ability to tie this all together into optimized AI infrastructure, AMD underscores the critical expertise to build and deploy world class AI solutions.”

    AMD Instinct MI325X Extends Leading AI Performance
    AMD Instinct MI325X accelerators deliver industry-leading memory capacity and bandwidth, with 256GB of HBM3E memory supporting 6.0TB/s, offering 1.8X more capacity and 1.3X more bandwidth than the Nvidia H200¹. The AMD Instinct MI325X also offers 1.3X greater peak theoretical FP16 and FP8 compute performance compared to the H200¹.
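    The capacity and compute multiples above can be cross-checked against the dense (non-sparse) figures published in footnote 1. A minimal sketch using only numbers that appear in this release:

```python
# Ratios recomputed from footnote 1 (MI325X vs. published Nvidia H200 figures).
mi325x_hbm_gb, h200_hbm_gb = 256, 141                  # HBM3E capacity in GB
mi325x_fp16_tflops, h200_fp16_tflops = 1307.4, 989.4   # peak theoretical FP16, dense

capacity_ratio = mi325x_hbm_gb / h200_hbm_gb           # ~1.82 -> "1.8X more capacity"
fp16_ratio = mi325x_fp16_tflops / h200_fp16_tflops     # ~1.32 -> "1.3X greater FP16"
print(f"capacity: {capacity_ratio:.2f}x, FP16: {fp16_ratio:.2f}x")
```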

    This leadership memory and compute can provide up to 1.3X the inference performance on Mistral 7B at FP16², 1.2X the inference performance on Llama 3.1 70B at FP8³ and 1.4X the inference performance on Mixtral 8x7B at FP16 compared to the H200⁴.

    AMD Instinct MI325X accelerators are currently on track for production shipments in Q4 2024 and are expected to have widespread system availability from a broad set of platform providers, including Dell Technologies, Eviden, Gigabyte, Hewlett Packard Enterprise, Lenovo, Supermicro and others starting in Q1 2025.

    Continuing its commitment to an annual roadmap cadence, AMD previewed the next-generation AMD Instinct MI350 series accelerators. Based on AMD CDNA 4 architecture, AMD Instinct MI350 series accelerators are designed to deliver a 35x improvement in inference performance compared to AMD CDNA 3-based accelerators⁵.

    The AMD Instinct MI350 series will continue to drive memory capacity leadership with up to 288GB of HBM3E memory per accelerator. The AMD Instinct MI350 series accelerators are on track to be available during the second half of 2025.

    AMD Next-Gen AI Networking
    AMD is leveraging the most widely deployed programmable DPU for hyperscalers to power next-gen AI networking. AI networking is split into two parts: the front-end, which delivers data and information to an AI cluster, and the back-end, which manages data transfer between accelerators and clusters. It is critical to ensuring CPUs and accelerators are utilized efficiently in AI infrastructure.

    To effectively manage these two networks and drive high performance, scalability and efficiency across the entire system, AMD introduced the AMD Pensando™ Salina DPU for the front-end and the AMD Pensando™ Pollara 400, the industry’s first Ultra Ethernet Consortium (UEC) ready AI NIC, for the back-end.

    The AMD Pensando Salina DPU is the third generation of the world’s most performant and programmable DPU, bringing up to 2X the performance, bandwidth and scale compared to the previous generation. Supporting 400G throughput for fast data transfer rates, the AMD Pensando Salina DPU is a critical component in AI front-end network clusters, optimizing performance, efficiency, security and scalability for data-driven AI applications.

    The AMD Pensando Pollara 400, powered by the AMD P4 Programmable engine, is the industry’s first UEC-ready AI NIC. It supports next-gen RDMA software and is backed by an open networking ecosystem. The AMD Pensando Pollara 400 is critical for providing leadership performance, scalability and efficiency in accelerator-to-accelerator communication in back-end networks.

    Both the AMD Pensando Salina DPU and AMD Pensando Pollara 400 are sampling with customers in Q4’24 and are on track for availability in the first half of 2025.

    AMD AI Software Delivering New Capabilities for Generative AI
    AMD continues its investment in driving software capabilities and the open ecosystem to deliver powerful new features and capabilities in the AMD ROCm™ open software stack.

    Within the open software community, AMD is driving support for AMD compute engines in the most widely used AI frameworks, libraries and models, including PyTorch, Triton, Hugging Face and many others. This work translates to out-of-the-box performance and support with AMD Instinct accelerators on popular generative AI models like Stable Diffusion 3, Meta Llama 3, 3.1 and 3.2, and more than one million models on Hugging Face.

    Beyond the community, AMD continues to advance its ROCm open software stack, bringing the latest features to support leading training and inference on generative AI workloads. ROCm 6.2 now includes support for critical AI features like the FP8 datatype, Flash Attention 3, Kernel Fusion and more. With these new additions, ROCm 6.2, compared to ROCm 6.0, provides up to a 2.4X performance improvement on inference⁶ and 1.8X on training for a variety of LLMs⁷.

    Supporting Resources

    • Follow AMD on LinkedIn
    • Follow AMD on Twitter
    • Read more about AMD Next Generation AI Networking here
    • Read more about AMD Instinct Accelerators here
    • Visit the AMD Advancing AI: 2024 event page

    About AMD
    For more than 50 years AMD has driven innovation in high-performance computing, graphics, and visualization technologies. Billions of people, leading Fortune 500 businesses, and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work, and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, LinkedIn, and X pages.

    CAUTIONARY STATEMENT

    This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products including the AMD Instinct™ MI325X accelerators; AMD Pensando™ Salina DPU; AMD Pensando Pollara 400; continued growth of AMD’s open software ecosystem; AMD Instinct MI350 series accelerators, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as “would,” “may,” “expects,” “believes,” “plans,” “intends,” “projects” and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions and expectations, speak only as of the date of this press release and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD’s control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. 
Material factors that could cause actual results to differ materially from current expectations include, without limitation, the following: Intel Corporation’s dominance of the microprocessor market and its aggressive business practices; Nvidia’s dominance in the graphics processing unit market and its aggressive business practices; the cyclical nature of the semiconductor industry; market conditions of the industries in which AMD products are sold; loss of a significant customer; competitive markets in which AMD’s products are sold; economic and market uncertainty; quarterly and seasonal sales patterns; AMD’s ability to adequately protect its technology or other intellectual property; unfavorable currency exchange rate fluctuations; ability of third party manufacturers to manufacture AMD’s products on a timely basis in sufficient quantities and using competitive technologies; availability of essential equipment, materials, substrates or manufacturing processes; ability to achieve expected manufacturing yields for AMD’s products; AMD’s ability to introduce products on a timely basis with expected features and performance levels; AMD’s ability to generate revenue from its semi-custom SoC products; potential security vulnerabilities; potential security incidents including IT outages, data loss, data breaches and cyberattacks; uncertainties involving the ordering and shipment of AMD’s products; AMD’s reliance on third-party intellectual property to design and introduce new products; AMD’s reliance on third-party companies for design, manufacture and supply of motherboards, software, memory and other computer platform components; AMD’s reliance on Microsoft and other software vendors’ support to design and develop software to run on AMD’s products; AMD’s reliance on third-party distributors and add-in-board partners; impact of modification or interruption of AMD’s internal business processes and information systems; compatibility of AMD’s products with some or all 
industry-standard software and hardware; costs related to defective products; efficiency of AMD’s supply chain; AMD’s ability to rely on third party supply-chain logistics functions; AMD’s ability to effectively control sales of its products on the gray market; long-term impact of climate change on AMD’s business; impact of government actions and regulations such as export regulations, tariffs and trade protection measures; AMD’s ability to realize its deferred tax assets; potential tax liabilities; current and future claims and litigation; impact of environmental laws, conflict minerals related provisions and other laws or regulations; evolving expectations from governments, investors, customers and other stakeholders regarding corporate responsibility matters; issues related to the responsible use of AI; restrictions imposed by agreements governing AMD’s notes, the guarantees of Xilinx’s notes and the revolving credit agreement; impact of acquisitions, joint ventures and/or investments on AMD’s business and AMD’s ability to integrate acquired businesses;  impact of any impairment of the combined company’s assets; political, legal and economic risks and natural disasters; future impairments of technology license purchases; AMD’s ability to attract and retain qualified personnel; and AMD’s stock price volatility. Investors are urged to review in detail the risks and uncertainties in AMD’s Securities and Exchange Commission filings, including but not limited to AMD’s most recent reports on Forms 10-K and 10-Q.

    AMD, the AMD Arrow logo, AMD CDNA, AMD Instinct, Pensando, ROCm, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other names are for informational purposes only and may be trademarks of their respective owners.

    ________________________________

    1 MI325-002: Calculations conducted by AMD Performance Labs as of May 28, 2024 for the AMD Instinct™ MI325X GPU resulted in 1307.4 TFLOPS peak theoretical half precision (FP16), 1307.4 TFLOPS peak theoretical Bfloat16 format precision (BF16), 2614.9 TFLOPS peak theoretical 8-bit precision (FP8), and 2614.9 TOPs peak theoretical INT8 performance. Actual performance will vary based on final specifications and system configuration.
    Published results on Nvidia H200 SXM (141GB) GPU: 989.4 TFLOPS peak theoretical half precision tensor (FP16 Tensor), 989.4 TFLOPS peak theoretical Bfloat16 tensor format precision (BF16 Tensor), 1,978.9 TFLOPS peak theoretical 8-bit precision (FP8), 1,978.9 TOPs peak theoretical INT8 performance. BFLOAT16 Tensor Core, FP16 Tensor Core, FP8 Tensor Core and INT8 Tensor Core performance were published by Nvidia using sparsity; for the purposes of comparison, AMD converted these numbers to non-sparsity/dense by dividing by 2, and these numbers appear above.
    Nvidia H200 source: https://nvdam.widen.net/s/nb5zzzsjdf/hpc-datasheet-sc23-h200-datasheet-3002446 and https://www.anandtech.com/show/21136/nvidia-at-sc23-h200-accelerator-with-hbm3e-and-jupiter-supercomputer-for-2024
    Note: Nvidia H200 GPUs have the same published FLOPs performance as H100 products https://resources.nvidia.com/en-us-tensor-core/.

    2 Based on testing completed on 9/28/2024 by AMD performance lab measuring overall latency for Mistral-7B model using FP16 datatype. Test was performed using input length of 128 tokens and an output length of 128 tokens for the following configurations of AMD Instinct™ MI325X GPU accelerator and NVIDIA H200 SXM GPU accelerator.

    1x MI325X at 1000W with vLLM performance: 0.637 sec (latency in seconds)
    Vs.
    1x H200 at 700W with TensorRT-LLM: 0.811 sec (latency in seconds)

    Configurations:
    AMD Instinct™ MI325X reference platform:
    1x AMD Ryzen™ 9 7950X 16-Core Processor CPU, 1x AMD Instinct MI325X (256GiB, 1000W) GPU, Ubuntu® 22.04, and ROCm™ 6.3 pre-release
    Vs
    NVIDIA H200 HGX platform:
    Supermicro SuperServer with 2x Intel Xeon® Platinum 8468 Processors, 8x Nvidia H200 (140GB, 700W) GPUs [only 1 GPU was used in this test], Ubuntu 22.04, CUDA 12.6. Server manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers and optimizations. MI325-005

    3 MI325-006: Based on testing completed on 9/28/2024 by AMD performance lab measuring overall latency for LLaMA 3.1-70B model using FP8 datatype. Test was performed using input length of 2048 tokens and an output length of 2048 tokens for the following configurations of AMD Instinct™ MI325X GPU accelerator and NVIDIA H200 SXM GPU accelerator.

    1x MI325X at 1000W with vLLM performance: 48.025 sec (latency in seconds)
    Vs.
    1x H200 at 700W with TensorRT-LLM: 62.688 sec (latency in seconds)

    Configurations:
    AMD Instinct™ MI325X reference platform:
    1x AMD Ryzen™ 9 7950X 16-Core Processor CPU, 1x AMD Instinct MI325X (256GiB, 1000W) GPU, Ubuntu® 22.04, and ROCm™ 6.3 pre-release
    Vs
    NVIDIA H200 HGX platform:
    Supermicro SuperServer with 2x Intel Xeon® Platinum 8468 Processors, 8x Nvidia H200 (140GB, 700W) GPUs, Ubuntu 22.04, CUDA 12.6

    Server manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers and optimizations.

    4 MI325-004: Based on testing completed on 9/28/2024 by AMD performance lab measuring text generated throughput for Mixtral-8x7B model using FP16 datatype. Test was performed using input length of 128 tokens and an output length of 4096 tokens for the following configurations of AMD Instinct™ MI325X GPU accelerator and NVIDIA H200 SXM GPU accelerator.

    1x MI325X at 1000W with vLLM performance: 4598 (Output tokens / sec)
    Vs.
    1x H200 at 700W with TensorRT-LLM: 2700.7 (Output tokens / sec)

    Configurations:
    AMD Instinct™ MI325X reference platform:
    1x AMD Ryzen™ 9 7950X CPU, 1x AMD Instinct MI325X (256GiB, 1000W) GPU, Ubuntu® 22.04, and ROCm™ 6.3 pre-release
    Vs
    NVIDIA H200 HGX platform:
    Supermicro SuperServer with 2x Intel Xeon® Platinum 8468 Processors, 8x Nvidia H200 (140GB, 700W) GPUs [only 1 GPU was used in this test], Ubuntu 22.04, CUDA® 12.6

    Server manufacturers may vary configurations, yielding different results. Performance may vary based on use of latest drivers and optimizations.
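    As a cross-check, the relative-performance multiples can be recomputed from the measurements reported in footnotes 2-4 (for latency, lower is better, so the ratio is H200 latency over MI325X latency; for throughput it is MI325X tokens/sec over H200 tokens/sec). A minimal sketch; the headline multiples in the body text may reflect different rounding or metric choices:

```python
# Ratios recomputed from the measurements AMD reports in footnotes 2-4.
# Latency ratios: H200 latency / MI325X latency (lower latency is better).
mistral_ratio = 0.811 / 0.637    # Mistral-7B, FP16, overall latency (s)
llama_ratio = 62.688 / 48.025    # Llama 3.1-70B, FP8, overall latency (s)
# Throughput ratio: MI325X / H200 (higher tokens/sec is better).
mixtral_ratio = 4598 / 2700.7    # Mixtral-8x7B, FP16, output tokens/sec

for name, r in [("Mistral-7B", mistral_ratio),
                ("Llama 3.1-70B", llama_ratio),
                ("Mixtral-8x7B", mixtral_ratio)]:
    print(f"{name}: {r:.2f}x")
```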

    5 CDNA4-03: Inference performance projections as of May 31, 2024 using engineering estimates based on the design of a future AMD CDNA 4-based Instinct MI350 Series accelerator as proxy for projected AMD CDNA™ 4 performance. A 1.8T GPT MoE model was evaluated assuming a token-to-token latency = 70ms real time, first token latency = 5s, input sequence length = 8k, output sequence length = 256, assuming a 4x 8-mode MI350 series proxy (CDNA4) vs. 8x MI300X per-GPU performance comparison. Actual performance will vary based on factors including but not limited to final specifications of production silicon, system configuration and inference model and size used.

    6 MI300-62: Testing conducted by internal AMD Performance Labs as of September 29, 2024, comparing inference performance between ROCm 6.2 software and ROCm 6.0 software on systems with 8 AMD Instinct™ MI300X GPUs running Llama 3.1-8B, Llama 3.1-70B, Mixtral-8x7B, Mixtral-8x22B, and Qwen 72B models.

    ROCm 6.2 with vLLM 0.5.5 performance was measured against the performance with ROCm 6.0 with vLLM 0.3.3, and tests were performed across batch sizes of 1 to 256 and sequence lengths of 128 to 2048.

    Configurations:
    1P AMD EPYC™ 9534 CPU server with 8x AMD Instinct™ MI300X (192GB, 750W) GPUs, Supermicro AS-8125GS-TNMR2, NPS1 (1 NUMA per socket), 1.5 TiB (24 DIMMs, 4800 mts memory, 64 GiB/DIMM), 4x 3.49TB Micron 7450 storage, BIOS version: 1.8, ROCm 6.2.0-00, vLLM 0.5.5, PyTorch 2.4.0, Ubuntu® 22.04 LTS with Linux kernel 5.15.0-119-generic.
    vs.
    1P AMD EPYC 9534 CPU server with 8x AMD Instinct™ MI300X (192GB, 750W) GPUs, Supermicro AS-8125GS-TNMR2, NPS1 (1 NUMA per socket), 1.5 TiB (24 DIMMs, 4800 mts memory, 64 GiB/DIMM), 4x 3.49TB Micron 7450 storage, BIOS version: 1.8, ROCm 6.0.0-00, vLLM 0.3.3, PyTorch 2.1.1, Ubuntu 22.04 LTS with Linux kernel 5.15.0-119-generic.

    Server manufacturers may vary configurations, yielding different results. Performance may vary based on factors including but not limited to different versions of configurations, vLLM, and drivers.

    7 MI300-61: Measurements conducted by AMD AI Product Management team on AMD Instinct™ MI300X GPU for comparing large language model (LLM) performance with optimization methodologies enabled and disabled as of 9/28/2024 on Llama 3.1-70B and Llama 3.1-405B and vLLM 0.5.5.

    System Configurations:
    – AMD EPYC 9654 96-Core Processor, 8 x AMD MI300X, ROCm™ 6.1, Linux® 7ee7e017abe3 5.15.0-116-generic #126-Ubuntu® SMP Mon Jul 1 10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux, Frequency boost: enabled.

    Performance may vary on factors including but not limited to different versions of configurations, vLLM, and drivers.

    Contact:
    Aaron Grabein
     AMD Communications
    +1 737-256-9518
    aaron.grabein@amd.com

    Mitch Haws
    AMD Investor Relations
    +1 512-944-0790 
    mitch.haws@amd.com

    The MIL Network

  • MIL-OSI: AMD Launches New Ryzen™ AI PRO 300 Series Processors to Power Next Generation of Commercial PCs

    Source: GlobeNewswire (MIL-OSI)

    – New processors deliver unprecedented AI compute capabilities¹ and multi-day battery life², enabling incredible productivity for business users –

    – AMD continues to expand commercial portfolio; more than 100 Ryzen AI PRO PCs on-track to launch through 2025 –

    SAN FRANCISCO, Oct. 10, 2024 (GLOBE NEWSWIRE) — Today, AMD (NASDAQ: AMD) announced its third generation of commercial AI mobile processors, designed specifically to transform business productivity with Copilot+ features including live captioning and language translation in conference calls and advanced AI image generators. The new Ryzen AI PRO 300 Series processors deliver industry-leading AI compute³, with up to three times the AI performance of the previous generation⁴, and offer uncompromising performance for everyday workloads. Enabled with AMD PRO Technologies, the Ryzen AI PRO 300 Series processors offer world-class security and manageability features designed to streamline IT operations and ensure exceptional ROI for businesses.
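    The "up to three times" generational figure follows directly from the NPU TOPS specifications cited in footnote 4 (50 TOPS for Ryzen AI 300 Series vs. 16 TOPS for Ryzen 8040 Series):

```python
# Generational NPU comparison from footnote 4's stated TOPS specifications.
ryzen_ai_300_tops = 50   # Ryzen AI 300 Series NPU, peak TOPS
ryzen_8040_tops = 16     # Ryzen 8040 Series NPU, peak TOPS
print(ryzen_ai_300_tops / ryzen_8040_tops)  # 3.125, i.e. "up to three times"
```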

    Ryzen AI PRO 300 Series processors feature the new AMD “Zen 5” architecture, delivering outstanding CPU performance, and are the world’s best lineup of commercial processors for Copilot+ enterprise PCs⁵. Laptops equipped with Ryzen AI PRO 300 Series processors are designed to tackle businesses’ toughest workloads, with the top-of-stack Ryzen AI 9 HX PRO 375 offering up to 40% higher performance⁶ and up to 14% faster productivity performance⁷ compared to Intel’s Core Ultra 7 165U. With the addition of the XDNA™ 2 architecture powering the integrated NPU, AMD Ryzen AI PRO 300 Series processors offer a cutting-edge 50+ NPU TOPS (Trillions of Operations Per Second) of AI processing power, exceeding Microsoft’s Copilot+ AI PC requirements⁸,⁹ and delivering exceptional AI compute and productivity capabilities for the modern business. Built on a 4nm process and with innovative power management, the new processors deliver extended battery life ideal for sustained performance and productivity on the go.

    “Enterprises are increasingly demanding more compute power and efficiency to drive their everyday tasks and most taxing workloads. We are excited to add the Ryzen AI PRO 300 Series, the most powerful AI processor built for business PCs¹⁰, to our portfolio of mobile processors,” said Jack Huynh, senior vice president and general manager, Computing and Graphics Group at AMD. “Our third generation AI-enabled processors for business PCs deliver unprecedented AI processing capabilities with incredible battery life and seamless compatibility for the applications users depend on.”

    AMD Ryzen AI PRO 300 Series Mobile Processors

    Model | Cores/Threads | Boost¹¹ / Base Frequency | Total Cache | Graphics Model | cTDP | TOPS
    AMD Ryzen™ AI 9 HX PRO 375 | 12C/24T | Up to 5.1GHz / 2GHz | 36MB | Radeon™ 890M Graphics | 15-54W | Up to 55
    AMD Ryzen™ AI 9 HX PRO 370 | 12C/24T | Up to 5.1GHz / 2GHz | 36MB | Radeon™ 890M Graphics | 15-54W | Up to 50
    AMD Ryzen™ AI 7 PRO 360 | 8C/16T | Up to 5GHz / 2GHz | 24MB | AMD Radeon™ 880M Graphics | 15-54W | Up to 50


    AMD Continues to Expand Commercial OEM Ecosystem

    OEM partners continue to expand their commercial offerings with new PCs powered by Ryzen AI PRO 300 Series processors, delivering well-rounded performance and compatibility to their business customers. With industry-leading TOPS, the next generation of Ryzen processor-powered commercial PCs is set to expand the possibilities of local AI processing with Microsoft Copilot+. OEM systems powered by Ryzen AI PRO 300 Series are expected to be on shelves starting later this year.

    “Microsoft’s partnership with AMD and the integration of Ryzen AI PRO processors into Copilot+ PCs demonstrate our joint focus on delivering impactful AI-driven experiences for our customers. The Ryzen AI PRO’s performance, combined with the latest features in Windows 11, enhances productivity, efficiency, and security,” said Pavan Davuluri, corporate vice president, Windows + Devices, Microsoft. “Features like improved Windows Search, Recall, and Click to Do make PCs more intuitive and responsive. Security enhancements, including the Microsoft Pluton security processor and Windows Hello Enhanced Sign-in Security, help safeguard customer data with advanced protection. We’re proud of our strong history of collaboration with AMD and are thrilled to bring these innovations to market.”

    “In today’s AI-powered era of computing, HP is dedicated to delivering powerful innovation and performance that revolutionizes the way people work,” said Alex Cho, president of Personal Systems, HP. “With the HP EliteBook X Next-Gen AI PC, we are empowering modern leaders to push boundaries without compromising power or performance. We are proud to expand our AI PC lineup powered by AMD, providing our commercial customers with a truly personalized experience.”

    “Lenovo’s partnership with AMD continues to drive AI PC innovation and deliver supreme performance for our business customers. Our recently announced ThinkPad T14s Gen 6 AMD, powered by the latest AMD Ryzen AI PRO 300 Series processors, showcases the strength of our collaboration,” said Luca Rossi, president, Lenovo Intelligent Devices Group. “This device offers outstanding AI computing power, enhanced security, and exceptional battery life, providing professionals with the tools they need to maximize productivity and efficiency. Together with AMD, we are transforming the business landscape by delivering smarter, AI-driven solutions that empower users to achieve more.”

    New PRO Technologies Features Build Upon Leadership Security and Management Features

    In addition to AMD Secure Processor¹², AMD Shadow Stack and AMD Platform Secure Boot, AMD has expanded its PRO Technologies lineup with new security and manageability features. Processors equipped with PRO Technologies will now come standard with Cloud Bare Metal Recovery, allowing IT teams to seamlessly recover systems via the cloud, ensuring smooth and continuous operations; Supply Chain Security (AMD Device Identity), a new supply chain security function enabling traceability across the supply chain; and Watch Dog Timer, building on existing resiliency support with additional detection and recovery processes.

    Additional AI-based malware detection is available via PRO Technologies with select ISV partners. These new security features leverage the integrated NPU to run AI-based security workloads without impacting day-to-day performance.

    About AMD
    For more than 50 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. Billions of people, leading Fortune 500 businesses and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, LinkedIn and X pages.

    Cautionary Statement
    This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products including the AMD Ryzen™ AI PRO 300 Series mobile processors, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as “would,” “may,” “expects,” “believes,” “plans,” “intends,” “projects” and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions and expectations, speak only as of the date of this press release and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD’s control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. 
Material factors that could cause actual results to differ materially from current expectations include, without limitation, the following: Intel Corporation’s dominance of the microprocessor market and its aggressive business practices; Nvidia’s dominance in the graphics processing unit market and its aggressive business practices; the cyclical nature of the semiconductor industry; market conditions of the industries in which AMD products are sold; loss of a significant customer; competitive markets in which AMD’s products are sold; economic and market uncertainty; quarterly and seasonal sales patterns; AMD’s ability to adequately protect its technology or other intellectual property; unfavorable currency exchange rate fluctuations; ability of third party manufacturers to manufacture AMD’s products on a timely basis in sufficient quantities and using competitive technologies; availability of essential equipment, materials, substrates or manufacturing processes; ability to achieve expected manufacturing yields for AMD’s products; AMD’s ability to introduce products on a timely basis with expected features and performance levels; AMD’s ability to generate revenue from its semi-custom SoC products; potential security vulnerabilities; potential security incidents including IT outages, data loss, data breaches and cyberattacks; uncertainties involving the ordering and shipment of AMD’s products; AMD’s reliance on third-party intellectual property to design and introduce new products; AMD’s reliance on third-party companies for design, manufacture and supply of motherboards, software, memory and other computer platform components; AMD’s reliance on Microsoft and other software vendors’ support to design and develop software to run on AMD’s products; AMD’s reliance on third-party distributors and add-in-board partners; impact of modification or interruption of AMD’s internal business processes and information systems; compatibility of AMD’s products with some or all 
industry-standard software and hardware; costs related to defective products; efficiency of AMD’s supply chain; AMD’s ability to rely on third party supply-chain logistics functions; AMD’s ability to effectively control sales of its products on the gray market; long-term impact of climate change on AMD’s business; impact of government actions and regulations such as export regulations, tariffs and trade protection measures; AMD’s ability to realize its deferred tax assets; potential tax liabilities; current and future claims and litigation; impact of environmental laws, conflict minerals related provisions and other laws or regulations; evolving expectations from governments, investors, customers and other stakeholders regarding corporate responsibility matters; issues related to the responsible use of AI; restrictions imposed by agreements governing AMD’s notes, the guarantees of Xilinx’s notes and the revolving credit agreement; impact of acquisitions, joint ventures and/or investments on AMD’s business and AMD’s ability to integrate acquired businesses; impact of any impairment of the combined company’s assets; political, legal and economic risks and natural disasters; future impairments of technology license purchases; AMD’s ability to attract and retain qualified personnel; and AMD’s stock price volatility. Investors are urged to review in detail the risks and uncertainties in AMD’s Securities and Exchange Commission filings, including but not limited to AMD’s most recent reports on Forms 10-K and 10-Q.

    © 2024 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, Radeon, RDNA, Ryzen, XDNA and combinations thereof are trademarks of Advanced Micro Devices, Inc. Certain AMD technologies may require third-party enablement or activation. Supported features may vary by operating system. Please confirm with the system manufacturer for specific features. No technology or product can be completely secure.

    The information contained herein is for informational purposes only and is subject to change without notice. Timelines, roadmaps, and/or product release dates shown in this Press Release are plans only and subject to change.


    1 As of May 2023, AMD has the first available dedicated AI engine on an x86 Windows processor, where ‘dedicated AI engine’ is defined as an AI engine that has no function other than to process AI inference models and is part of the x86 processor die. For detailed information, please check: https://www.amd.com/en/technologies/xdna.html. PHX-3a.
    2 All battery life claims are approximate. Actual battery life will vary based on several factors, including, but not limited to: product configuration and usage, software, operating conditions, wireless functionality, power management settings, screen brightness and other factors. The maximum capacity of the battery will naturally decrease with time and use. AMD has not independently tested or verified the battery life claim. GD-168.
    3 Based on AMD product specifications and competitive products announced as of Oct 2024. AMD Ryzen™ AI PRO 300 Series processors’ NPU offers up to 55 peak TOPS. This is the most TOPS offered on any system found in enterprise today. AI PC is defined as a laptop PC with a processor that includes a neural processing unit (NPU). STXP-06.
    4 Based on TOPS specification of AMD Ryzen™ AI 300 Series processors with 50 TOPS compared to an AMD Ryzen 8040 Series processors with 16 TOPS as of June 2024. STX-01. 
    5 Based on product specifications and competitive products announced as of Oct 2024 and testing as of Sept 2024 by AMD performance labs using the following systems: HP EliteBook X G1a with AMD Ryzen AI 9 HX PRO 375 processor @23W, Radeon 880M graphics, 32GB of RAM, 512GB SSD, VBS=ON, Windows 11 PRO; Dell Latitude 7450 with Intel Core Ultra 7 165U processor @15W (vPro enabled), Intel Iris Xe Graphics, VBS=ON, 32GB RAM, 512GB NVMe SSD, Microsoft Windows 11 Professional; Dell Latitude 7450 with Intel Core Ultra 7 165H processor @28W (vPro enabled), Intel Iris Xe Graphics, VBS=ON, 16GB RAM, 512GB NVMe SSD, Microsoft Windows 11 Pro. All systems were tested in Best Performance Mode. AI PC is defined as a laptop PC with a processor that includes a neural processing unit (NPU). STXP-04.
    6 Testing as of Sept 2024 by AMD performance labs on an HP EliteBook X G1a (14in) (40W) with AMD Ryzen AI 9 HX PRO 375 processor, Radeon™ 890M graphics, 32GB of RAM, 512GB SSD, VBS=ON, Windows 11 Pro vs. a Dell Latitude 7450 with an Intel Core Ultra 7 165H processor (vPro enabled), Intel Arc Graphics, VBS=ON, 16GB RAM, 512GB NVMe SSD, Microsoft Windows 11 Pro in the application(s) (Best Performance Mode): Cinebench R24 nT. Laptop manufacturers may vary configurations yielding different results. STXP-12.
    7 Testing as of Sept 2024 by AMD performance labs using the following systems: (1) HP EliteBook X G1a with AMD Ryzen AI 9 HX PRO 375 processor (@40W), Radeon™ 890M graphics, 32GB of RAM, 512GB SSD, VBS=ON, Windows 11 Pro; (2) Dell Latitude 7450 with Intel Core Ultra 7 165U processor (@15W) (vPro enabled), Intel Iris Xe Graphics, VBS=ON, 32GB RAM, 512GB NVMe SSD, Microsoft Windows 11 Professional; and (3) Dell Latitude 7450 with Intel Core Ultra 7 165H processor (@28W) (vPro enabled), Intel Integrated, VBS=ON, 16GB RAM, 512GB NVMe SSD, Microsoft Windows 11 Pro. Tested applications (in Balanced Mode) include: Procyon Office Productivity, Procyon Office Productivity Excel, Procyon Office Productivity Outlook, Procyon Office Productivity Power Point, Procyon Office Productivity Word, Composite Geomean Score. Laptop manufacturers may vary configurations yielding different results. STXP-18.
    8 Based on Microsoft Copilot+ requirements of minimum 40 TOPS using AMD product specifications and competitive products announced as of Oct 2024. Microsoft requirements found here – https://support.microsoft.com/en-us/topic/copilot-pc-hardware-requirements-35782169-6eab-4d63-a5c5-c498c3037364. STXP-05.
    9 Trillions of Operations per Second (TOPS) for an AMD Ryzen processor is the maximum number of operations per second that can be executed in an optimal scenario and may not be typical. TOPS may vary based on several factors, including the specific system configuration, AI model, and software version. GD-243.
    10 Testing as of Sept 2024 by AMD performance labs using the following benchmarks: Blender, Cinebench R24, Geekbench 6.3, and Passmark 11, systems: HP EliteBook X G1a with AMD Ryzen AI 9 HX PRO 375 processor @54W, Radeon 880M graphics, 32GB of RAM, 512GB SSD; Lenovo ThinkPad T14s Gen 6 with AMD Ryzen™ AI 7 PRO 360 processor @22W, Radeon™ 880M graphics, 32GB RAM, 1TB SSD; Dell Latitude 7450 with Intel Core Ultra 7 165U processor @15W (vPro enabled), Intel Iris Xe Graphics, 32GB RAM, 512GB NVMe SSD; Dell Latitude 7450 with Intel Core Ultra 7 165H processor @28W (vPro enabled), Intel Iris Xe Graphics, 16GB RAM, 512GB NVMe SSD. All systems Windows 11 Pro, VBS=ON, and tested in Best Performance Mode. PassMark is a registered trademark of PassMark Software Pty Ltd. AI PC is defined as a laptop PC with a processor that includes a neural processing unit (NPU). STXP-07.
    11 Boost Clock Frequency is the maximum frequency achievable on the CPU running a bursty workload. Boost clock achievability, frequency, and sustainability will vary based on several factors, including but not limited to: thermal conditions and variation in applications and workloads. GD-150
    12 The AMD Secure Processor is a dedicated on-chip security processor integrated within each system-on-a-chip (SoC) and ASIC (Application Specific Integrated Circuit) designed by AMD. It enables secure boot with root of trust anchored in hardware, initializes the SoC through a secure boot flow, and establishes an isolated Trusted Execution Environment. GD-72.

    Contact:
    Stacy MacDiarmid
    AMD Communications
    +1 512-658-2265
    Stacy.MacDiarmid@amd.com

    Mitch Haws
    AMD Investor Relations
    +1 512-944-0790
    Mitch.Haws@amd.com

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/c67477ae-0d96-4936-91ba-cd836bfa321e

    The MIL Network

  • MIL-OSI: AMD Unveils Leadership AI Solutions at Advancing AI 2024

    Source: GlobeNewswire (MIL-OSI)

    — AMD launches 5th Gen AMD EPYC processors, AMD Instinct MI325X accelerators, next gen networking solutions and AMD Ryzen AI PRO processors powering enterprise AI at scale —

    — Dell, Google Cloud, HPE, Lenovo, Meta, Microsoft, Oracle Cloud Infrastructure, Supermicro and AI leaders Databricks, Essential AI, Fireworks AI, Luma AI and Reka AI joined AMD to showcase expanding AMD AI solutions for enterprises and end users —

    — Technical leaders from Cohere, Google DeepMind, Meta, Microsoft, OpenAI and more discussed how they are using AMD ROCm software to deploy models and applications on AMD Instinct accelerators —

    SAN FRANCISCO, Oct. 10, 2024 (GLOBE NEWSWIRE) — AMD (NASDAQ: AMD) today launched the latest high performance computing solutions defining the AI computing era, including 5th Gen AMD EPYC™ server CPUs, AMD Instinct™ MI325X accelerators, AMD Pensando™ Salina DPUs, AMD Pensando Pollara 400 NICs and AMD Ryzen™ AI PRO 300 series processors for enterprise AI PCs. AMD and its partners also showcased how they are deploying AMD AI solutions at scale, the continued ecosystem growth of AMD ROCm™ open source AI software, and a broad portfolio of new solutions based on AMD Instinct accelerators, EPYC CPUs and Ryzen PRO CPUs.

    “The data center and AI represent significant growth opportunities for AMD, and we are building strong momentum for our EPYC and AMD Instinct processors across a growing set of customers,” said AMD Chair and CEO Dr. Lisa Su. “With our new EPYC CPUs, AMD Instinct GPUs and Pensando DPUs we are delivering leadership compute to power our customers’ most important and demanding workloads. Looking ahead, we see the data center AI accelerator market growing to $500 billion by 2028. We are committed to delivering open innovation at scale through our expanded silicon, software, network and cluster-level solutions.”

    Defining the Data Center in the AI Era
    AMD announced a broad portfolio of data center solutions for AI, enterprise, cloud and mixed workloads:

    • New AMD EPYC 9005 Series processors deliver record-breaking performance1 to enable optimized compute solutions for diverse data center needs. Built on the latest “Zen 5” architecture, the lineup offers up to 192 cores and will be available in a wide range of platforms from leading OEMs and ODMs starting today.
    • AMD continues executing its annual cadence of AI accelerators with the launch of AMD Instinct MI325X, delivering leadership performance and memory capabilities for the most demanding AI workloads. AMD also shared new details on next-gen AMD Instinct MI350 series accelerators expected to launch in the second half of 2025, extending AMD Instinct leadership memory capacity and generative AI performance. AMD has made significant progress developing the AMD Instinct MI400 Series accelerators based on the AMD CDNA Next architecture, planned to be available in 2026.
    • AMD has continuously improved its AMD ROCm software stack, doubling AMD Instinct MI300X accelerator inferencing and training performance2 across a wide range of the most popular AI models. Today, over one million models run seamlessly out of the box on AMD Instinct, triple the number available when MI300X launched, with day-zero support for the most widely used models.
    • AMD also expanded its high performance networking portfolio to address evolving system networking requirements for AI infrastructure, maximizing CPU and GPU performance to deliver performance, scalability and efficiency across the entire system. The AMD Pensando Salina DPU delivers a high performance front-end network for AI systems, while the AMD Pensando Pollara 400, the first Ultra Ethernet Consortium ready NIC, reduces the complexity of performance tuning and helps improve time to production.

    AMD partners detailed how they leverage AMD data center solutions to drive leadership generative AI capabilities, deliver cloud infrastructure used by millions of people daily and power on-prem and hybrid data centers for leading enterprises:

    • Since launching in December 2023, AMD Instinct MI300X accelerators have been deployed at scale by leading cloud, OEM and ODM partners and are serving millions of users daily on popular AI models, including OpenAI’s ChatGPT, Meta Llama and over one million open source models on the Hugging Face platform.
    • Google highlighted how AMD EPYC processors power a wide range of instances for AI, high performance, general purpose and confidential computing, including their AI Hypercomputer, a supercomputing architecture designed to maximize AI ROI. Google also announced EPYC 9005 Series-based VMs will be available in early 2025.
    • Oracle Cloud Infrastructure shared how it leverages AMD EPYC CPUs, AMD Instinct accelerators and Pensando DPUs to deliver fast, energy efficient compute and networking infrastructure for customers like Uber, Red Bull Powertrains, PayPal and Fireworks AI. OCI announced the new E6 compute platform powered by EPYC 9005 processors.
    • Databricks highlighted how its models and workflows run seamlessly on AMD Instinct and ROCm and disclosed that their testing shows the large memory capacity and compute capabilities of AMD Instinct MI300X GPUs help deliver an over 50% increase in performance on Llama and Databricks proprietary models.
    • Microsoft CEO Satya Nadella highlighted Microsoft’s longstanding collaboration and co-innovation with AMD across its product offerings and infrastructure, with MI300X delivering strong performance on Microsoft Azure and GPT workloads. Nadella and Su also discussed the companies’ deep partnership on the AMD Instinct roadmap and how Microsoft is planning to leverage future generations of AMD Instinct accelerators including MI350 series and beyond to deliver leadership performance-per-dollar-per-watt for AI applications.
    • Meta detailed how AMD EPYC CPUs and AMD Instinct accelerators power its compute infrastructure across AI deployments and services, with MI300X serving all live traffic on Llama 405B. Meta is also partnering with AMD to optimize AI performance from silicon, systems, and networking to software and applications.
    • Leading OEMs Dell, HPE, Lenovo and Supermicro are expanding on their highly performant, energy efficient AMD EPYC processor-based lineups with new platforms designed to modernize data centers for the AI era.

    Expanding an Open AI Ecosystem
    AMD continues to invest in the open AI ecosystem and expand the AMD ROCm open source software stack with new features, tools, optimizations and support to help developers extract the ultimate performance from AMD Instinct accelerators and deliver out-of-the-box support for today’s leading AI models. Leaders from Essential AI, Fireworks AI, Luma AI and Reka AI discussed how they are optimizing models across AMD hardware and software.

    AMD also hosted a developer event joined by technical leaders from across the AI developer ecosystem, including Microsoft, OpenAI, Meta, Cohere, xAI and more. In luminary presentations hosted by the inventors of AI programming languages, models and frameworks critical to the AI transformation under way, such as Triton, TensorFlow, vLLM, Paged Attention and FastChat, developers shared how they are unlocking AI performance optimizations through vendor-agnostic programming languages and accelerating models on AMD Instinct accelerators, and highlighted the ease of porting to ROCm software and how the ecosystem benefits from an open-source approach.

    Enabling Enterprise Productivity with AI PCs
    AMD launched AMD Ryzen AI PRO 300 Series processors, powering the first Microsoft Copilot+ laptops enabled for the enterprise3. The Ryzen AI PRO 300 Series processor lineup extends AMD leadership in performance and battery life with the addition of enterprise-grade security and manageability features for business users.

    • The Ryzen AI PRO 300 Series processors, featuring the new AMD “Zen 5” and AMD XDNA™ 2 architectures, are the world’s most advanced commercial processors4, offering best in class performance for unmatched productivity5 and an industry leading 55 NPU TOPS6 of AI performance with the Ryzen AI 9 HX PRO 375 processor to process AI tasks locally on Ryzen AI PRO laptops.
    • Microsoft highlighted how Windows 11 Copilot+ and the Ryzen AI PRO 300 lineup are ready for next generation AI experiences, including new productivity and security features.
    • OEM partners including HP and Lenovo are expanding their commercial offerings with new PCs powered by Ryzen AI PRO 300 Series processors, with more than 100 platforms expected to come to market through 2025.

    Supporting Resources

    • Watch the AMD Advancing AI keynote and see the news here
    • Follow AMD on X
    • Connect with AMD on LinkedIn

    About AMD
    For more than 50 years AMD has driven innovation in high-performance computing, graphics, and visualization technologies. Billions of people, leading Fortune 500 businesses, and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work, and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, LinkedIn, and X pages.

    Cautionary Statement
    This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products; AMD’s expected data center and AI growth opportunities; the ability of AMD to build momentum for AMD EPYC™ and AMD Instinct™ processors across its customers; the ability of AMD to deliver leadership compute to power its customers’ workloads; the anticipated growth of the data center AI accelerator market by 2028; and AMD’s commitment to delivering open innovation at scale, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as “would,” “may,” “expects,” “believes,” “plans,” “intends,” “projects” and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions and expectations, speak only as of the date of this press release and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD’s control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements.

    AMD, the AMD Arrow logo, EPYC, AMD CDNA, AMD Instinct, Pensando, ROCm, Ryzen, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other names are for informational purposes only and may be trademarks of their respective owners.

    __________________________________

    1 EPYC-022F: For a complete list of world records see: http://amd.com/worldrecords.
    2 Testing conducted by internal AMD Performance Labs as of September 29, 2024 inference performance comparison between ROCm 6.2 software and ROCm 6.0 software on the systems with 8 AMD Instinct™ MI300X GPUs coupled with Llama 3.1-8B, Llama 3.1-70B, Mixtral-8x7B, Mixtral-8x22B, and Qwen 72B models.
    ROCm 6.2 with vLLM 0.5.5 performance was measured against the performance with ROCm 6.0 with vLLM 0.3.3, and tests were performed across batch sizes of 1 to 256 and sequence lengths of 128 to 2048.
    Configurations:
    1P AMD EPYC™ 9534 CPU server with 8x AMD Instinct™ MI300X (192GB, 750W) GPUs, Supermicro AS-8125GS-TNMR2, NPS1 (1 NUMA per socket), 1.5 TiB (24 DIMMs, 4800 MT/s memory, 64 GiB/DIMM), 4x 3.49TB Micron 7450 storage, BIOS version: 1.8, ROCm 6.2.0-00, vLLM 0.5.5, PyTorch 2.4.0, Ubuntu® 22.04 LTS with Linux kernel 5.15.0-119-generic.
    vs.
    1P AMD EPYC 9534 CPU server with 8x AMD Instinct™ MI300X (192GB, 750W) GPUs, Supermicro AS-8125GS-TNMR2, NPS1 (1 NUMA per socket), 1.5 TiB (24 DIMMs, 4800 MT/s memory, 64 GiB/DIMM), 4x 3.49TB Micron 7450 storage, BIOS version: 1.8, ROCm 6.0.0-00, vLLM 0.3.3, PyTorch 2.1.1, Ubuntu 22.04 LTS with Linux kernel 5.15.0-119-generic. MI300-62
    Server manufacturers may vary configurations, yielding different results. Performance may vary based on factors including but not limited to different versions of configurations, vLLM, and drivers.
    3 Based on Microsoft Copilot+ requirements of minimum 40 TOPS using AMD product specifications and competitive products announced as of Oct 2024. Microsoft requirements found here – https://support.microsoft.com/en-us/topic/copilot-pc-hardware-requirements-35782169-6eab-4d63-a5c5-c498c3037364. STXP-05.
    4 Based on a small node size for an x86 platform and cutting-edge, interconnected technologies, as of September 2024. GD-203b
    5 Testing as of Sept 2024 by AMD performance labs using the following systems: HP EliteBook X G1a with AMD Ryzen AI 9 HX PRO 375 processor @40W, Radeon™ 890M graphics, 32GB of RAM, 512GB SSD, VBS=ON, Windows 11 Pro; Lenovo ThinkPad T14s Gen 6 with AMD Ryzen™ AI 7 PRO 360 processor @22W, Radeon™ 880M graphics, 32GB RAM, 1TB SSD, VBS=ON, Windows 11 Pro; Dell Latitude 7450 with Intel Core Ultra 7 165U processor @15W (vPro enabled), Intel Iris Xe Graphics, VBS=ON, 32GB RAM, 512GB NVMe SSD, Microsoft Windows 11 Professional; Dell Latitude 7450 with Intel Core Ultra 7 165H processor @28W (vPro enabled), Intel Iris Xe Graphics, VBS=ON, 16GB RAM, 512GB NVMe SSD, Microsoft Windows 11 Pro. The following applications were tested in Balanced Mode: Teams + Procyon Office Productivity, Teams + Procyon Office Productivity Excel, Teams + Procyon Office Productivity Outlook, Teams + Procyon Office Productivity Power Point, Teams + Procyon Office Productivity Word, Composite Geomean Score. Each Microsoft Teams call consists of 9 participants (3X3). Laptop manufacturers may vary configurations yielding different results. STXP-10.
    Testing as of Sept 2024 by AMD performance labs using the following systems: (1) Lenovo ThinkPad T14s Gen 6 with an AMD Ryzen™ AI 7 PRO 360 processor (@22W), Radeon™ 880M graphics, 32GB RAM, 1TB SSD, VBS=ON, Windows 11 Pro; (2) Dell Latitude 7450 with Intel Core Ultra 7 165U processor (@15W) (vPro enabled), Intel Iris Xe Graphics, VBS=ON, 32GB RAM, 512GB NVMe SSD, Microsoft Windows 11 Professional; and (3) Dell Latitude 7450 with Intel Core Ultra 7 165H processor (@28W) (vPro enabled), Intel Arc Graphics, VBS=ON, 16GB RAM, 512GB NVMe SSD, Microsoft Windows 11 Pro. Tested applications (in Balanced Mode) include: Procyon Office Productivity, Procyon Office Productivity Excel, Procyon Office Productivity Outlook, Procyon Office Productivity Power Point, Procyon Office Productivity Word, Composite Geomean Score. Laptop manufacturers may vary configurations yielding different results. STXP-11.
    6 Trillions of Operations per Second (TOPS) for an AMD Ryzen processor is the maximum number of operations per second that can be executed in an optimal scenario and may not be typical. TOPS may vary based on several factors, including the specific system configuration, AI model, and software version. GD-243.

    Media Contacts:
    Brandi Martina
    AMD Communications
    +1 512-705-1720 
    brandi.martina@amd.com

    Mitch Haws
    AMD Investor Relations
    +1 512-944-0790
    mitch.haws@amd.com

    The MIL Network

  • MIL-OSI: Origin Bancorp, Inc. Announces Third Quarter 2024 Earnings Release and Conference Call

    Source: GlobeNewswire (MIL-OSI)

    RUSTON, La., Oct. 10, 2024 (GLOBE NEWSWIRE) — Origin Bancorp, Inc. (NYSE: OBK) (“Origin”), the financial holding company for Origin Bank, plans to issue third quarter 2024 results after the market closes on Wednesday, October 23, 2024, and hold a conference call to discuss such results on Thursday, October 24, 2024, at 8:00 a.m. Central Time (9:00 a.m. Eastern Time). The conference call will be hosted by Drake Mills, Chairman, President and CEO of Origin, William J. Wallace, IV, Chief Financial Officer of Origin, and Lance Hall, President and CEO of Origin Bank.

    Conference Call and Live Webcast

    To participate in the live conference call, please dial +1 (929) 272-1574 (U.S. Local / International 1); +1 (857) 999-3259 (U.S. Local / International 2); +1 (800) 528-1066 (U.S. Toll Free), enter Conference ID: 84865 and request to be joined into the Origin Bancorp, Inc. (OBK) call. A simultaneous audio-only webcast may be accessed via Origin’s website at http://www.origin.bank under the investor relations, News & Events, Events & Presentations link or directly by visiting https://dealroadshow.com/e/ORIGINQ324.

    Conference Call Webcast Archive

    If you are unable to participate during the live webcast, the webcast will be archived on the Investor Relations section of Origin’s website at http://www.origin.bank, under Investor Relations, News & Events, Events & Presentations.

    About Origin Bancorp, Inc.

    Origin Bancorp, Inc. is a financial holding company headquartered in Ruston, Louisiana. Origin’s wholly owned bank subsidiary, Origin Bank, was founded in 1912 in Choudrant, Louisiana. Deeply rooted in Origin’s history is a culture committed to providing personalized relationship banking to businesses, municipalities, and personal clients to enrich the lives of the people in the communities it serves. Origin provides a broad range of financial services and currently operates more than 60 locations from Dallas/Fort Worth, East Texas and Houston, North Louisiana, Mississippi, South Alabama and the Florida Panhandle. For more information, visit http://www.origin.bank.

    Contact Information
    Investor Relations
    Chris Reigelman
    318-497-3177
    chris@origin.bank

    Media Contact
    Ryan Kilpatrick
    318-232-7472
    rkilpatrick@origin.bank

    The MIL Network

  • MIL-OSI: AMD Launches 5th Gen AMD EPYC CPUs, Maintaining Leadership Performance and Features for the Modern Data Center

    Source: GlobeNewswire (MIL-OSI)

    — New EPYC processors deliver record breaking performance and efficiency for a wide range of data center workloads —

    — AMD EPYC CPUs continue momentum, with more than 950 AMD EPYC-powered public instances available globally and more than 350 platforms from OxMs —

    SAN FRANCISCO, Oct. 10, 2024 (GLOBE NEWSWIRE) — AMD (NASDAQ: AMD) today announced the availability of the 5th Gen AMD EPYC™ processors, formerly codenamed “Turin,” the world’s best server CPU for enterprise, AI and cloud1.

    Using the “Zen 5” core architecture, compatible with the broadly deployed SP5 platform2 and offering a broad range of core counts spanning from 8 to 192, the AMD EPYC 9005 Series processors extend the record-breaking performance3 and energy efficiency of the previous generations with the top of stack 192 core CPU delivering up to 2.7X the performance4 compared to the competition.

    New to the AMD EPYC 9005 Series CPUs is the 64 core AMD EPYC 9575F, tailor-made for GPU-powered AI solutions that need the ultimate in host CPU capabilities. Boosting up to 5GHz5, versus the competition’s 3.8GHz processor, it delivers up to 28% faster processing to keep GPUs fed with data for demanding AI workloads.

    “From powering the world’s fastest supercomputers, to leading enterprises, to the largest Hyperscalers, AMD has earned the trust of customers who value demonstrated performance, innovation and energy efficiency,” said Dan McNamara, senior vice president and general manager, server business, AMD. “With five generations of on-time roadmap execution, AMD has proven it can meet the needs of the data center market and give customers the standard for data center performance, efficiency, solutions and capabilities for cloud, enterprise and AI workloads.”

    The World’s Best CPU for Enterprise, AI and Cloud Workloads

    Modern data centers run a variety of workloads, from supporting corporate AI-enablement initiatives, to powering large-scale cloud-based infrastructures to hosting the most demanding business-critical applications. The new 5th Gen AMD EPYC processors provide leading performance and capabilities for the broad spectrum of server workloads driving business IT today.

    The new “Zen 5” core architecture provides up to 17% better instructions per clock (IPC) for enterprise and cloud workloads and up to 37% higher IPC in AI and high performance computing (HPC) compared to “Zen 4.”6
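    As a rough sanity check, per-core throughput scales approximately as IPC × clock frequency, so at equal clocks the quoted IPC gains translate directly into per-core uplift. A minimal sketch of that model (the clock values below are illustrative placeholders, not AMD specifications):

```python
# Rough model: per-core throughput ~ IPC x clock frequency.
# The IPC uplifts are the "Zen 5" vs "Zen 4" figures quoted above;
# the clock values are illustrative placeholders, not AMD specs.

def relative_perf(ipc_uplift: float, clock_ghz: float, base_clock_ghz: float) -> float:
    """Per-core performance relative to the baseline generation."""
    return (1.0 + ipc_uplift) * (clock_ghz / base_clock_ghz)

# At equal clocks, all of the uplift comes from IPC alone:
enterprise_cloud = relative_perf(0.17, 4.0, 4.0)  # ~1.17x vs "Zen 4"
ai_hpc = relative_perf(0.37, 4.0, 4.0)            # ~1.37x vs "Zen 4"
```

A frequency change compounds with the IPC gain, which is why boost-clock headroom matters for the AI host-node parts discussed below.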

    With AMD EPYC 9965 processor-based servers, customers can expect significant impact in their real world applications and workloads compared to the Intel Xeon® 8592+ CPU-based servers, with:

    • Up to 4X faster time to results on business applications such as video transcoding.7
    • Up to 3.9X faster time to insights for science and HPC applications that solve the world’s most challenging problems.8
    • Up to 1.6X the performance per core in virtualized infrastructure.9

    In addition to leadership performance and efficiency in general purpose workloads, 5th Gen AMD EPYC processors enable customers to drive fast time to insights and deployments for AI deployments, whether they are running a CPU or a CPU + GPU solution.

    Compared to the competition:

    • The 192 core EPYC 9965 CPU has up to 3.7X the performance on end-to-end AI workloads, like TPCx-AI (derivative), which are critical for driving an efficient approach to generative AI.10
    • In small and medium size enterprise-class generative AI models, like Meta’s Llama 3.1-8B, the EPYC 9965 provides 1.9X the throughput performance compared to the competition.11
    • Finally, the purpose built AI host node CPU, the EPYC 9575F, can use its 5GHz max frequency boost to help a 1,000 node AI cluster drive up to 700,000 more inference tokens per second. Accomplishing more, faster.12
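    The cluster-level figure above reduces to a simple per-node number: dividing the quoted aggregate gain by the node count gives the implied uplift per host. This is arithmetic on the press-release numbers only, not an AMD measurement methodology:

```python
# Implied per-node gain behind the 1,000-node cluster claim above.
extra_cluster_tokens_per_sec = 700_000  # figure quoted in the press release
nodes = 1_000

per_node_gain = extra_cluster_tokens_per_sec / nodes
print(per_node_gain)  # 700.0 additional inference tokens/sec per node
```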

    By modernizing to a data center powered by these new processors to achieve 391,000 units of SPECrate®2017_int_base general-purpose computing performance, customers receive impressive performance for various workloads while gaining the ability to use an estimated 71% less power and ~87% fewer servers.13 This gives CIOs the flexibility to either benefit from the space and power savings or add performance for day-to-day IT tasks while delivering leadership AI performance.

    AMD EPYC CPUs – Driving Next Wave of Innovation
    The proven performance and deep ecosystem support across partners and customers have driven widespread adoption of EPYC CPUs to power the most demanding computing tasks. With leading performance, features and density, AMD EPYC CPUs help customers drive value in their data centers and IT environments quickly and efficiently.

    5th Gen AMD EPYC Features
    The entire lineup of 5th Gen AMD EPYC processors is available today, with support from Cisco, Dell, Hewlett Packard Enterprise, Lenovo and Supermicro as well as all major ODMs and cloud service providers, providing a simple upgrade path for organizations seeking compute and AI leadership.

    High level features of the AMD EPYC 9005 series CPUs include:

    • Leadership core count options from 8 to 192 cores per CPU
    • “Zen 5” and “Zen 5c” core architectures
    • 12 channels of DDR5 memory per CPU
    • Support for up to DDR5-6400 MT/s14
    • Leadership boost frequencies up to 5GHz5
    • AVX-512 with the full 512b data path
    • Trusted I/O for Confidential Computing, and FIPS certification in process for every part in the series
    Model (AMD EPYC) | Cores | CCD (Zen5/Zen5c) | Base/Boost5 (up to GHz) | Default TDP (W) | L3 Cache (MB) | Price (1 KU, USD)
    9965  | 192 | “Zen5c” | 2.25 / 3.7 | 500W | 384 | $14,813
    9845  | 160 | “Zen5c” | 2.1 / 3.7  | 390W | 320 | $13,564
    9825  | 144 | “Zen5c” | 2.2 / 3.7  | 390W | 384 | $13,006
    9755  | 128 | “Zen5”  | 2.7 / 4.1  | 500W | 512 | $12,984
    9745  | 128 | “Zen5c” | 2.4 / 3.7  | 400W | 256 | $12,141
    9655  | 96  | “Zen5”  | 2.6 / 4.5  | 400W | 384 | $11,852
    9655P | 96  | “Zen5”  | 2.6 / 4.5  | 400W | 384 | $10,811
    9645  | 96  | “Zen5c” | 2.3 / 3.7  | 320W | 384 | $11,048
    9565  | 72  | “Zen5”  | 3.15 / 4.3 | 400W | 384 | $10,486
    9575F | 64  | “Zen5”  | 3.3 / 5.0  | 400W | 256 | $11,791
    9555  | 64  | “Zen5”  | 3.2 / 4.4  | 360W | 256 | $9,826
    9555P | 64  | “Zen5”  | 3.2 / 4.4  | 360W | 256 | $7,983
    9535  | 64  | “Zen5”  | 2.4 / 4.3  | 300W | 256 | $8,992
    9475F | 48  | “Zen5”  | 3.65 / 4.8 | 400W | 256 | $7,592
    9455  | 48  | “Zen5”  | 3.15 / 4.4 | 300W | 192 | $5,412
    9455P | 48  | “Zen5”  | 3.15 / 4.4 | 300W | 192 | $4,819
    9365  | 36  | “Zen5”  | 3.4 / 4.3  | 300W | 256 | $4,341
    9375F | 32  | “Zen5”  | 3.8 / 4.8  | 320W | 256 | $5,306
    9355  | 32  | “Zen5”  | 3.55 / 4.4 | 280W | 256 | $3,694
    9355P | 32  | “Zen5”  | 3.55 / 4.4 | 280W | 256 | $2,998
    9335  | 32  | “Zen5”  | 3.0 / 4.4  | 210W | 256 | $3,178
    9275F | 24  | “Zen5”  | 4.1 / 4.8  | 320W | 256 | $3,439
    9255  | 24  | “Zen5”  | 3.25 / 4.3 | 200W | 128 | $2,495
    9175F | 16  | “Zen5”  | 4.2 / 5.0  | 320W | 512 | $4,256
    9135  | 16  | “Zen5”  | 3.65 / 4.3 | 200W | 64  | $1,214
    9115  | 16  | “Zen5”  | 2.6 / 4.1  | 125W | 64  | $726
    9015  | 8   | “Zen5”  | 3.6 / 4.1  | 125W | 64  | $527

    Supporting Resources

    About AMD
    For more than 50 years AMD has driven innovation in high-performance computing, graphics, and visualization technologies. Billions of people, leading Fortune 500 businesses, and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work, and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, LinkedIn and X pages.

    Cautionary Statement
    This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products including AMD EPYC™ processors, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as “would,” “may,” “expects,” “believes,” “plans,” “intends,” “projects” and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions and expectations, speak only as of the date of this press release and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD’s control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. 
Material factors that could cause actual results to differ materially from current expectations include, without limitation, the following: Intel Corporation’s dominance of the microprocessor market and its aggressive business practices; Nvidia’s dominance in the graphics processing unit market and its aggressive business practices; the cyclical nature of the semiconductor industry; market conditions of the industries in which AMD products are sold; loss of a significant customer; competitive markets in which AMD’s products are sold; economic and market uncertainty; quarterly and seasonal sales patterns; AMD’s ability to adequately protect its technology or other intellectual property; unfavorable currency exchange rate fluctuations; ability of third party manufacturers to manufacture AMD’s products on a timely basis in sufficient quantities and using competitive technologies; availability of essential equipment, materials, substrates or manufacturing processes; ability to achieve expected manufacturing yields for AMD’s products; AMD’s ability to introduce products on a timely basis with expected features and performance levels; AMD’s ability to generate revenue from its semi-custom SoC products; potential security vulnerabilities; potential security incidents including IT outages, data loss, data breaches and cyberattacks; uncertainties involving the ordering and shipment of AMD’s products; AMD’s reliance on third-party intellectual property to design and introduce new products; AMD’s reliance on third-party companies for design, manufacture and supply of motherboards, software, memory and other computer platform components; AMD’s reliance on Microsoft and other software vendors’ support to design and develop software to run on AMD’s products; AMD’s reliance on third-party distributors and add-in-board partners; impact of modification or interruption of AMD’s internal business processes and information systems; compatibility of AMD’s products with some or all 
industry-standard software and hardware; costs related to defective products; efficiency of AMD’s supply chain; AMD’s ability to rely on third party supply-chain logistics functions; AMD’s ability to effectively control sales of its products on the gray market; long-term impact of climate change on AMD’s business; impact of government actions and regulations such as export regulations, tariffs and trade protection measures; AMD’s ability to realize its deferred tax assets; potential tax liabilities; current and future claims and litigation; impact of environmental laws, conflict minerals related provisions and other laws or regulations; evolving expectations from governments, investors, customers and other stakeholders regarding corporate responsibility matters; issues related to the responsible use of AI; restrictions imposed by agreements governing AMD’s notes, the guarantees of Xilinx’s notes and the revolving credit agreement; impact of acquisitions, joint ventures and/or investments on AMD’s business and AMD’s ability to integrate acquired businesses;  impact of any impairment of the combined company’s assets; political, legal and economic risks and natural disasters; future impairments of technology license purchases; AMD’s ability to attract and retain qualified personnel; and AMD’s stock price volatility. Investors are urged to review in detail the risks and uncertainties in AMD’s Securities and Exchange Commission filings, including but not limited to AMD’s most recent reports on Forms 10-K and 10-Q.

    AMD, the AMD Arrow logo, EPYC and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other names are for informational purposes only and may be trademarks of their respective owners.

    1 EPYC-029C: Comparison based on thread density, performance, features, process technology and built-in security features of currently shipping servers as of 10/10/2024. EPYC 9005 series CPUs offer the highest thread density [EPYC-025B], leads the industry with 500+ performance world records [EPYC-023F] with performance world record enterprise leadership Java® ops/sec performance [EPYCWR-20241010-260], top HPC leadership with floating-point throughput performance [EPYCWR-2024-1010-381], AI end-to-end performance with TPCx-AI performance [EPYCWR-2024-1010-525] and highest energy efficiency scores [EPYCWR-20241010-326]. The 5th Gen EPYC series also has 50% more DDR5 memory channels [EPYC-033C] with 70% more memory bandwidth [EPYC-032C] and supports 70% more PCIe® Gen5 lanes for I/O throughput [EPYC-035C], has up to 5x the L3 cache/core [EPYC-043C] for faster data access, uses advanced 3-4nm technology, and offers Secure Memory Encryption + Secure Encrypted Virtualization (SEV) + SEV Encrypted State + SEV-Secure Nested Paging security features. See the AMD EPYC Architecture White Paper (https://library.amd.com/l/3f4587d147382e2/) for more information.

    2 AMD EPYC™ 9005 processors utilize the SP5 socket. Many factors determine system compatibility. Check with your server manufacturer to determine if this processor is supported in systems configured with previously launched AMD EPYC 9004 family CPUs.

    3 EPYC-022F: For a complete list of world records see: http://amd.com/worldrecords.

    4 9xx5-002C: SPECrate®2017_int_base comparison based on published scores from http://www.spec.org as of 10/10/2024.

    2P AMD EPYC 9965 (3030 SPECrate®2017_int_base, 384 Total Cores, 500W TDP, $14,813 CPU $), 6.060 SPECrate®2017_int_base/CPU W, 0.205 SPECrate®2017_int_base/CPU $, https://www.spec.org/cpu2017/results/res2024q3/cpu2017-20240923-44833.html)

    2P AMD EPYC 9755 (2720 SPECrate®2017_int_base, 256 Total Cores, 500W TDP, $12,984 CPU $), 5.440 SPECrate®2017_int_base/CPU W, 0.209 SPECrate®2017_int_base/CPU $, https://www.spec.org/cpu2017/results/res2024q4/cpu2017-20240923-44837.pdf)

    2P AMD EPYC 9754 (1950 SPECrate®2017_int_base, 256 Total Cores, 360W TDP, $11,900 CPU $), 5.417 SPECrate®2017_int_base/CPU W, 0.164 SPECrate®2017_int_base/CPU $, https://www.spec.org/cpu2017/results/res2023q2/cpu2017-20230522-36617.html)

    2P AMD EPYC 9654 (1810 SPECrate®2017_int_base, 192 Total Cores, 360W TDP, $11,805 CPU $), 5.028 SPECrate®2017_int_base/CPU W, 0.153 SPECrate®2017_int_base/CPU $, https://www.spec.org/cpu2017/results/res2024q1/cpu2017-20240129-40896.html)

    2P Intel Xeon Platinum 8592+ (1130 SPECrate®2017_int_base, 128 Total Cores, 350W TDP, $11,600 CPU $) 3.229 SPECrate®2017_int_base/CPU W, 0.097 SPECrate®2017_int_base/CPU $, http://spec.org/cpu2017/results/res2023q4/cpu2017-20231127-40064.html)

    2P Intel Xeon 6780E (1410 SPECrate®2017_int_base, 288 Total Cores, 330W TDP, $11,350 CPU $) 4.273 SPECrate®2017_int_base/CPU W, 0.124 SPECrate®2017_int_base/CPU $, https://spec.org/cpu2017/results/res2024q3/cpu2017-20240811-44406.html)

    SPEC®, SPEC CPU®, and SPECrate® are registered trademarks of the Standard Performance Evaluation Corporation. See http://www.spec.org for more information. Intel CPU TDP at https://ark.intel.com/.
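    The per-watt and per-dollar figures in footnote 4 are simple ratios of the published 2P SPECrate®2017_int_base score to a single CPU's TDP and 1 KU price. A minimal Python sketch reproducing that arithmetic from the values listed in the footnote (the system names are labels only, not part of any tool):

    ```python
    # Reproduce footnote 4's derived metrics: score / CPU TDP and score / CPU price,
    # rounded to three decimals as published. Inputs are copied from the footnote.
    def perf_ratios(score, tdp_w, price_usd):
        """Return (SPECrate per CPU watt, SPECrate per CPU dollar)."""
        return round(score / tdp_w, 3), round(score / price_usd, 3)

    systems = {
        "2P EPYC 9755": (2720, 500, 12984),
        "2P EPYC 9654": (1810, 360, 11805),
        "2P Xeon Platinum 8592+": (1130, 350, 11600),
    }
    for name, cfg in systems.items():
        per_w, per_usd = perf_ratios(*cfg)
        print(f"{name}: {per_w} SPECrate/CPU W, {per_usd} SPECrate/CPU $")
    ```

    Running this yields the same 5.440/0.209, 5.028/0.153 and 3.229/0.097 figures quoted in the footnote.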

    5 GD-150: Boost Clock Frequency is the maximum frequency achievable on the CPU running a bursty workload. Boost clock achievability, frequency, and sustainability will vary based on several factors, including but not limited to: thermal conditions and variation in applications and workloads.

    6 9xx5-001: Based on AMD internal testing as of 9/10/2024, geomean performance improvement (IPC) at fixed-frequency.

    – 5th Gen EPYC CPU Enterprise and Cloud Server Workloads generational IPC uplift of 1.170x (geomean), using a select set of 36 workloads; this is the geomean of estimated scores for total and all subsets of SPECrate®2017_int_base (geomean), estimated scores for total and all subsets of SPECrate®2017_fp_base (geomean), scores for Server Side Java multi-instance max ops/sec, representative Cloud Server workloads (geomean), and representative Enterprise Server workloads (geomean).

    “Genoa” Config (all NPS1): EPYC 9654 BIOS TQZ1005D 12c12t (1c1t/CCD in 12+1), FF 3GHz, 12x DDR5-4800 (2Rx4 64GB), 32Gbps xGMI;

    “Turin” config (all NPS1): EPYC 9V45 BIOS RVOT1000F 12c12t (1c1t/CCD in 12+1), FF 3GHz, 12x DDR5-6000 (2Rx4 64GB), 32Gbps xGMI

    Utilizing Performance Determinism and the Performance governor on Ubuntu® 22.04 w/ 6.8.0-40-generic kernel OS for all workloads.

    – 5th Gen EPYC generational ML/HPC Server Workloads IPC uplift of 1.369x (geomean), using a select set of 24 workloads; this is the geomean of representative ML Server Workloads (geomean) and representative HPC Server Workloads (geomean).

    “Genoa” config (all NPS1): EPYC 9654 BIOS TQZ1005D 12c12t (1c1t/CCD in 12+1), FF 3GHz, 12x DDR5-4800 (2Rx4 64GB), 32Gbps xGMI;

    “Turin” config (all NPS1): EPYC 9V45 BIOS RVOT1000F 12c12t (1c1t/CCD in 12+1), FF 3GHz, 12x DDR5-6000 (2Rx4 64GB), 32Gbps xGMI

    Utilizing Performance Determinism and the Performance governor on Ubuntu 22.04 w/ 6.8.0-40-generic kernel OS for all workloads except LAMMPS, HPCG, NAMD, OpenFOAM, Gromacs which utilize 24.04 w/ 6.8.0-40-generic kernel.

    SPEC® and SPECrate® are registered trademarks of the Standard Performance Evaluation Corporation. Learn more at spec.org.

    7 9xx5-006: AMD internal testing as of 09/01/2024, on FFMPEG (Raw to VP9, 1080P, 302 Frames, 1 instance/thread, video source: https://media.xiph.org/video/derf/y4m/ducks_take_off_1080p50.y4m).

    System Configurations: 2P AMD EPYC™ 9965 reference system (2 x 192C) 1.5TB 24x64GB DDR5-6400 running at 6000MT/s, SAMSUNG MZWLO3T8HCLS-00A07, NPS=4, Ubuntu 22.04.3 LTS, Kernel Linux 5.15.0-119-generic, BIOS RVOT1000C (determinism enable=power), 10825484.25 Frames/Hour Median

    2P AMD EPYC™ 9654 production system (2 x 96C) 1.5TB 24x64GB DDR5-5600, SAMSUNG MO003200KYDNC, NPS=4, Ubuntu 22.04.3 LTS, Kernel Linux 5.15.0-119-generic, BIOS 1.56 (determinism enable=power), 5154133.333 Frames/Hour Median

    2P Intel Xeon Platinum 8592+ production system (2 x 64C) 1TB 16x64GB DDR5-5600, 3.2 TB NVME, Ubuntu 22.04.3 LTS, Kernel Linux 6.5.0-35-generic), BIOS ESE122V-3.10, 2712701.754 Frames/Hour Median

    For 3.99x the performance with the AMD EPYC 9965 vs Intel Xeon Platinum 8592+ systems

    For 1.90x the performance with the AMD EPYC 9654 vs Intel Xeon Platinum 8592+ systems

    Results may vary based on factors including but not limited to BIOS and OS settings and versions, software versions and data used.

    8 9xx5-022: Source: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/performance-briefs/amd-epyc-9005-pb-gromacs.pdf

    9 9xx5-071: VMmark® 4.0.1 host/node FC SAN comparison based on “independently published” results as of 10/10/2024.  
    Configurations:

    2 node, 2P AMD EPYC 9575F (128 total cores) powered server running VMware ESXi8.0 U3, 3.31 @ 4 tiles,
    https://www.infobellit.com/BlueBookSeries/VMmark4-FDR-1003

    2 node, 2P AMD EPYC 9554 (128 total cores) powered server running VMware ESXi 8.0 U3, 2.64 @ 3 tiles,
    https://www.infobellit.com/BlueBookSeries/VMmark4-FDR-1002

    2 node, 2P Intel Xeon Platinum 8592+ (128 total cores) powered server running VMware ESXi 8.0 U3, 2.06 @ 2.4 Tiles,
    https://www.infobellit.com/BlueBookSeries/VMmark4-FDR-1001

    VMmark is a registered trademark of VMware in the US or other countries.

    10 9xx5-012: TPCxAI @SF30 Multi-Instance 32C Instance Size throughput results based on AMD internal testing as of 09/05/2024 running multiple VM instances. The aggregate end-to-end AI throughput test is derived from the TPCx-AI benchmark and as such is not comparable to published TPCx-AI results, as the end-to-end AI throughput test results do not comply with the TPCx-AI Specification.

    2P AMD EPYC 9965 (384 Total Cores), 12 32C instances, NPS1, 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1DPC, 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu® 22.04.4 LTS, 6.8.0-40-generic (tuned-adm profile throughput-performance, ulimit -l 198096812, ulimit -n 1024, ulimit -s 8192), BIOS RVOT1000C (SMT=off, Determinism=Power, Turbo Boost=Enabled)

    2P AMD EPYC 9755 (256 Total Cores), 8 32C instances, NPS1, 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1DPC, 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu 22.04.4 LTS, 6.8.0-40-generic (tuned-adm profile throughput-performance, ulimit -l 198096812, ulimit -n 1024, ulimit -s 8192), BIOS RVOT0090F (SMT=off, Determinism=Power, Turbo Boost=Enabled)

    2P AMD EPYC 9654 (192 Total cores) 6 32C instances, NPS1, 1.5TB 24x64GB DDR5-4800, 1DPC, 2 x 1.92 TB Samsung MZQL21T9HCJR-00A07 NVMe, Ubuntu 22.04.3 LTS, BIOS 1006C (SMT=off, Determinism=Power)

    Versus 2P Xeon Platinum 8592+ (128 Total Cores), 4 32C instances, AMX On, 1TB 16x64GB DDR5-5600, 1DPC, 1.0 Gbps NetXtreme BCM5719 Gigabit Ethernet PCIe, 3.84 TB KIOXIA KCMYXRUG3T84 NVMe, Ubuntu 22.04.4 LTS, 6.5.0-35 generic (tuned-adm profile throughput-performance, ulimit -l 132065548, ulimit -n 1024, ulimit -s 8192), BIOS ESE122V (SMT=off, Determinism=Power, Turbo Boost=Enabled)

    Results:

    CPU                 | Median   | Relative | Generational
    Turin 192C, 12 Inst | 6067.531 | 3.775    | 2.278
    Turin 128C, 8 Inst  | 4091.85  | 2.546    | 1.536
    Genoa 96C, 6 Inst   | 2663.14  | 1.657    | 1
    EMR 64C, 4 Inst     | 1607.417 | 1        | NA

    Results may vary due to factors including system configurations, software versions and BIOS settings. TPC, TPC Benchmark and TPC-C are trademarks of the Transaction Processing Performance Council.

    11 9xx5-009: Llama3.1-8B throughput results based on AMD internal testing as of 09/05/2024.

    Llama3-8B configurations: IPEX.LLM 2.4.0, NPS=2, BF16, batch size 4, Use Case Input/Output token configurations: [Summary = 1024/128, Chatbot = 128/128, Translate = 1024/1024, Essay = 128/1024, Caption = 16/16].

    2P AMD EPYC 9965 (384 Total Cores), 6 64C instances 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1 DPC, 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu® 22.04.3 LTS, 6.8.0-40-generic (tuned-adm profile throughput-performance, ulimit -l 198096812, ulimit -n 1024, ulimit -s 8192) , BIOS RVOT1000C, (SMT=off, Determinism=Power, Turbo Boost=Enabled), NPS=2

    2P AMD EPYC 9755 (256 Total Cores), 4 64C instances , 1.5TB 24x64GB DDR5-6400 (at 6000 MT/s), 1DPC, 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu 22.04.3 LTS, 6.8.0-40-generic (tuned-adm profile throughput-performance, ulimit -l 198096812, ulimit -n 1024, ulimit -s 8192), BIOS RVOT1000C (SMT=off, Determinism=Power, Turbo Boost=Enabled), NPS=2

    2P AMD EPYC 9654 (192 Total Cores) 4 48C instances , 1.5TB 24x64GB DDR5-4800, 1DPC, 1.0 Gbps NetXtreme BCM5720 Gigabit Ethernet PCIe, 3.5 TB Samsung MZWLO3T8HCLS-00A07 NVMe®, Ubuntu® 22.04.4 LTS, 5.15.85-051585-generic (tuned-adm profile throughput-performance, ulimit -l 1198117616, ulimit -n 500000, ulimit -s 8192), BIOS RVI1008C (SMT=off, Determinism=Power, Turbo Boost=Enabled), NPS=2

    Versus 2P Xeon Platinum 8592+ (128 Total Cores), 2 64C instances , AMX On, 1TB 16x64GB DDR5-5600, 1DPC, 1.0 Gbps NetXtreme BCM5719 Gigabit Ethernet PCIe, 3.84 TB KIOXIA KCMYXRUG3T84 NVMe®, Ubuntu 22.04.4 LTS 6.5.0-35-generic (tuned-adm profile throughput-performance, ulimit -l 132065548, ulimit -n 1024, ulimit -s 8192), BIOS ESE122V (SMT=off, Determinism=Power, Turbo Boost = Enabled).
    Results:

    CPU                                       | 2P EMR 64c | 2P Turin 192c | 2P Turin 128c | 2P Genoa 96c
    Average Aggregate Median Total Throughput | 99.474     | 193.267       | 182.595       | 138.978
    Competitive                               | 1          | 1.943         | 1.836         | 1.397
    Generational                              | NA         | 1.391         | 1.314         | 1

    Results may vary due to factors including system configurations, software versions and BIOS settings.

    12 9xx5-087: As of 10/10/2024; this scenario contains several assumptions and estimates and, while based on AMD internal research and best approximations, should be considered an example for information purposes only, and not used as a basis for decision making over actual testing.

    Referencing 9XX5-056A: “2P AMD EPYC 9575F powered server and 8x AMD Instinct MI300X GPUs running Llama3.1-70B select inference workloads at FP8 precision vs 2P Intel Xeon Platinum 8592+ powered server and 8x AMD Instinct MI300X GPUs has ~8% overall throughput increase across select inference use cases” and 8763.52 tokens/s (9575F) versus 8,048.48 tokens/s (8592+) at 128 input / 2048 output tokens, 500 prompts for 1.089x the tokens/s or 715.04 more tokens/s.

    1 Node = 2 CPUs and 8 GPUs.
    Assuming a 1000 node cluster, 1000 * 715.04 = 715,040 tokens/s

    For ~700,000 more tokens/s

    Results may vary due to factors including system configurations, software versions and BIOS settings.
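    The cluster-level claim in footnote 12 is a straight extrapolation of the per-node delta. A short Python sketch of that arithmetic, using only the figures stated in the footnote (the 1,000-node cluster size is the footnote's own assumption, not a tested configuration):

    ```python
    # Arithmetic behind footnote 12's "~700,000 more tokens/s" estimate.
    # Per-node throughput figures are taken from the footnote (1 node = 2 CPUs + 8 GPUs).
    tokens_9575f = 8763.52   # tokens/s per node with EPYC 9575F host CPUs
    tokens_8592p = 8048.48   # tokens/s per node with Xeon Platinum 8592+ host CPUs

    delta_per_node = tokens_9575f - tokens_8592p   # 715.04 more tokens/s per node
    speedup = tokens_9575f / tokens_8592p          # ~1.089x per-node throughput

    nodes = 1000                                   # assumed cluster size
    extra_tokens = delta_per_node * nodes          # ~715,040 more tokens/s cluster-wide
    print(f"+{extra_tokens:,.0f} tokens/s across {nodes} nodes ({speedup:.3f}x)")
    ```

    This reproduces the 1.089x ratio, the 715.04 tokens/s per-node delta and the ~715,040 (rounded to ~700,000) cluster-wide figure quoted above.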

    13 9xx5TCO-001a: This scenario contains many assumptions and estimates and, while based on AMD internal research and best approximations, should be considered an example for information purposes only, and not used as a basis for decision making over actual testing. The AMD Server & Greenhouse Gas Emissions TCO (total cost of ownership) Estimator Tool – version 1.12, compares the selected AMD EPYC™ and Intel® Xeon® CPU based server solutions required to deliver a TOTAL_PERFORMANCE of 39100 units of SPECrate2017_int_base performance as of October 10, 2024. This scenario compares a legacy 2P Intel Xeon 28 core Platinum_8280 based server with a score of 391 versus a 2P EPYC 9965 (192C) powered server with a score of 3030 (https://spec.org/cpu2017/results/res2024q3/cpu2017-20240923-44833.pdf) along with a comparison upgrade to a 2P Intel Xeon Platinum 8592+ (64C) based server with a score of 1130 (https://spec.org/cpu2017/results/res2024q3/cpu2017-20240701-43948.pdf). Actual SPECrate®2017_int_base score for 2P EPYC 9965 will vary based on OEM publications.

    Environmental impact estimates made leveraging this data, using the Country / Region specific electricity factors from the 2024 International Country Specific Electricity Factors 10 – July 2024 , and the United States Environmental Protection Agency ‘Greenhouse Gas Equivalencies Calculator’.

    For additional details, see https://www.amd.com/en/claims/epyc5#9xx5TCO-001a
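    The server-count side of the consolidation claim can be reproduced directly from the per-server scores in footnote 13. A minimal Python sketch (the 71% power figure additionally depends on per-server power data not listed here, so only the ~87% fewer-servers figure is derived):

    ```python
    import math

    # Server-count arithmetic behind footnote 13's "~87% fewer servers" estimate.
    # Per-server 2P SPECrate(R)2017_int_base scores come from the footnote.
    target = 39100          # TOTAL_PERFORMANCE to deliver, in SPECrate units
    legacy_score = 391      # legacy 2P Xeon Platinum 8280 server
    epyc_score = 3030       # 2P EPYC 9965 (192C) server

    legacy_servers = math.ceil(target / legacy_score)   # 100 legacy servers
    epyc_servers = math.ceil(target / epyc_score)       # 13 EPYC servers
    fewer = 1 - epyc_servers / legacy_servers           # 0.87 -> ~87% fewer servers
    print(f"{legacy_servers} legacy vs {epyc_servers} EPYC servers: {fewer:.0%} fewer")
    ```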

    14 9xx5-083: 5th Gen EPYC processors support DDR5-6400 MT/s for targeted customers and configurations. 5th Gen production SKUs support up to DDR5-6000 MT/s to enable a broad set of DIMMs across all OEM platforms and maintain SP5 platform compatibility.

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/3bb614ee-e307-43a7-a36b-f5bd02ed1335

    The MIL Network

  • MIL-OSI: VINCENT GELLE APPOINTED DEPUTY CHIEF EXECUTIVE OFFICER OF MOBILIZE FINANCIAL SERVICES, RCI BANQUE’S COMMERCIAL BRAND

    Source: GlobeNewswire (MIL-OSI)

    October 10th, 2024

    PRESS RELEASE

    VINCENT GELLE APPOINTED DEPUTY CHIEF EXECUTIVE OFFICER OF MOBILIZE FINANCIAL SERVICES, RCI BANQUE’S COMMERCIAL BRAND

    Mobilize Financial Services announces the appointment of Vincent Gellé as Deputy Chief Executive Officer, effective October 4th.

    This appointment is part of the new organization sought by Martin Thomas to ensure that Mobilize Financial Services, the financial arm of the Renault Group brands, meets the challenges of the sector and strengthens its position as market leader.

    Martin Thomas, CEO, Mobilize Financial Services: “Mobilize Financial Services is giving itself the means to write a new chapter in its development in a particularly demanding context. I’m delighted that Vincent Gellé, who has worked his way up through the Group in a variety of positions both in France and internationally, can continue to bring us his expertise in this new role.”

            

    Born in 1978, Vincent Gellé graduated from ESSEC business school in 2000. He joined RCI Banque in 2001, holding a number of financial and commercial positions in France and abroad.
    He began his career in the UK in 2001 with Renault Financial Services, before joining RCI Banque’s head office in 2005 as Financial Controller. From 2008, Vincent Gellé successively held the positions of Administrative and Financial Director in South Korea, then Group Performance Control Director. In 2016, he continued his career in Japan with Nissan’s Finance Department, then in Russia as Sales & Marketing Director of RN Bank.
    He then joined Mobilize Financial Services headquarters in France, where he has held the role of VP, Accounting and Group Performance Control since August 2023. He is a member of the RCI Banque Executive Committee.

    About Mobilize Financial Services  
    Attentive to the needs of all its customers, Mobilize Financial Services, a subsidiary of Renault Group, creates innovative financial services to build sustainable mobility for all. Mobilize Financial Services, which began operations nearly 100 years ago, is the commercial brand of RCI Banque SA, a French bank specializing in automotive financing and services for customers and networks of Renault Group, and also for the brands Nissan and Mitsubishi in several countries.  
    With operations in 35 countries and nearly 4,000 employees, Mobilize Financial Services financed more than 1.2 million contracts (new and used vehicles) in 2023 and sold 3.9 million services. At the end of June 2024, average earning assets stood at 54.9 billion euros of financing and pre-tax earnings at 553 million euros.
    Since 2012, the Group has deployed a deposit-taking business in several countries. At the end of June 2024, net deposits amounted to 29.4 billion euros, or 50% of the company’s net assets.
    To find out more about Mobilize Financial Services: http://www.mobilize-fs.com/  
    Follow us on Twitter: @Mobilize_FS 

    Attachment

    The MIL Network

  • MIL-OSI: Alliance Witan PLC – Net Asset Value

    Source: GlobeNewswire (MIL-OSI)

    ALLIANCE WITAN PLC
                     
    At the close of business on Wednesday 09 October 2024:

    The Company’s NAV per ordinary share, valued on a bid price basis with Debt at Par, was

    –       excluding income, 1265.9p
                     
    –       including income, 1266.7p
      
    The Company’s NAV per ordinary share, valued on a bid price basis with Debt at Fair Value, was

    –       excluding income, 1283.2p

    –       including income, 1284.0p

    For further information, please contact: –

     
    Juniper Partners Limited
    Tel. +44 (0)131 378 0500

    Notes

    1. Net Asset Values are calculated in accordance with published accounting policies and AIC guidelines.
    2. The fair value of the Company’s fixed loan notes is calculated by reference to a benchmark gilt.

    The MIL Network

  • MIL-OSI: MEF Launches CIM Service API to Advance Enterprise Connectivity

    Source: GlobeNewswire (MIL-OSI)

    LOS ANGELES, Oct. 10, 2024 (GLOBE NEWSWIRE) — MEF, a global industry association of network, cloud, security, and technology providers accelerating enterprise digital transformation, today announced its new Lifecycle Service Orchestration (LSO) Circuit Impairment & Maintenance (CIM) Service API designed to enable service providers to automate and standardize how network circuit impairments and scheduled maintenance are communicated to enterprises. Developed in collaboration with MEF’s Enterprise Leadership Council, the LSO CIM Service API is a transformative solution addressing one of the most critical gaps in enterprise connectivity management—delivering real-time, automated notifications about service disruptions and maintenance activities across complex, multi-provider networks.

    The CIM Service API is part of MEF’s portfolio of LSO enterprise and operational automation APIs, a powerful suite that empowers enterprises to seamlessly perform automated business and operations with their service providers. Aligned with MEF’s Network-as-a-Service (NaaS) Industry Blueprint, the comprehensive enterprise portfolio features a range of assets to fuel the advancement of NaaS for enterprises. Enabling enterprise automation systems to interact with service provider networks enhances real-time responsiveness and service continuity for enterprises. By bridging networks and applications, MEF is positioning itself as a key driver in enterprise connectivity transformation.

    Enterprises today rely on diverse network circuits to maintain connectivity between locations, cloud services, and critical infrastructure. However, the industry’s standard for conveying impairment and maintenance information has been limited to manual email communications, often resulting in delays and operational disruptions. MEF’s LSO CIM Service API eliminates these inefficiencies by automating and standardizing notifications, allowing enterprises to make faster, more informed decisions.

    Key Features of the LSO CIM Service API:

    • Real-Time Updates: Enterprises receive proactive notifications on impairments, maintenance windows, and incident resolutions, improving response times.
    • Automated Efficiency: Leveraging MEF’s LSO API Framework, CIM notifications are delivered through APIs, reducing manual processing.
    • Comprehensive Visibility: Detailed information on incident severity and resolution timelines helps enterprises optimize operational agility.
    • Industry-Wide Standardization: The LSO CIM Service API is standardized, eliminating the need to implement unique API automation for each provider.

    “MEF’s LSO CIM Service API fills a crucial void in enterprise network management by delivering real-time insights into network performance. This is a game-changer for businesses that depend on the seamless operation of multiple circuits across diverse service providers around the globe,” said Sunil Khandekar, Chief Enterprise Development Officer, MEF. “Standardizing the way impairment and maintenance updates are communicated enables businesses to stay ahead of disruptions and optimize their connectivity management. The LSO CIM Service API is poised to become an essential part of network connectivity for enterprises worldwide as they continue to scale their digital infrastructure and services.”

    Impact on Enterprises
    With the LSO CIM Service API, enterprises gain the ability to proactively manage their network circuits, minimizing downtime and improving service quality. The automated system ensures that critical updates reach the right teams quickly, driving operational efficiency and reducing the risk of business disruption.

    The CIM Service API leverages MEF’s LSO Trouble Ticket and Incident Management API and can be integrated into existing enterprise network management systems. Its standardized approach is designed to drive widespread industry adoption, enabling enterprises to request the inclusion of CIM services in their Requests for Information (RFIs) and Requests for Proposals (RFPs).

    Live Demonstration at MEF’s Global NaaS Event (GNE)
    MEF’s GNE 2024, taking place in Dallas on 28–30 October, will feature a live demonstration of the LSO CIM Service API, showcasing how AT&T, Prodapt, and Verizon work with enterprise customers Bloomberg, UPMC, and Williams-Sonoma. The demo will highlight how real-time notifications on circuit impairments and scheduled maintenance can enhance operational efficiency and continuity, demonstrating the power of MEF’s standardized APIs.

    Find more information on MEF’s LSO CIM Service API and the LSO Marketplace here. To learn more about how MEF is driving network transformation and enabling dynamic services across a global ecosystem of automated networks, visit http://www.MEF.net.

    For more information about GNE registration and sponsor opportunities visit: https://gne.mef.net.

    About MEF
    MEF is a global consortium of service, cloud, cybersecurity, and technology providers collaborating to accelerate enterprise digital transformation. It delivers standards-based frameworks, services, technologies, APIs, and certification programs to enable Network-as-a-Service (NaaS) across an automated ecosystem. MEF is the defining authority for certified Lifecycle Service Orchestration (LSO) business and operational APIs and Carrier Ethernet, SASE, SD-WAN, Zero Trust, and Security Service Edge (SSE) technologies and services. MEF’s Global NaaS Event (GNE) convenes industry leaders building and delivering the next generation of NaaS solutions. For more information about MEF, visit MEF.net and follow us on LinkedIn and Twitter.

    Media Contact:
    Melissa Power
    MEF
    pr@mef.net

    The MIL Network

  • MIL-OSI: Publication of a Circular – Notice of General Meeting

    Source: GlobeNewswire (MIL-OSI)

    NOT FOR RELEASE, PUBLICATION OR DISTRIBUTION IN WHOLE OR IN PART, DIRECTLY OR INDIRECTLY, IN, INTO OR FROM THE UNITED STATES, CANADA, AUSTRALIA, JAPAN, THE REPUBLIC OF SOUTH AFRICA OR ANY OTHER JURISDICTION WHERE TO DO SO WOULD CONSTITUTE A VIOLATION OF THE RELEVANT LAWS OR REGULATIONS OF THAT JURISDICTION. THE INFORMATION CONTAINED HEREIN DOES NOT CONSTITUTE AN OFFER OF SECURITIES FOR SALE IN ANY JURISDICTION, INCLUDING IN THE UNITED STATES, CANADA, AUSTRALIA, JAPAN OR THE REPUBLIC OF SOUTH AFRICA.

    HARGREAVE HALE AIM VCT PLC

    LEI: 213800LRYA19A69SIT31

    10 October 2024

    Publication of a circular

    On 9 October 2024, Hargreave Hale AIM VCT plc (the “Company“) launched an offer for subscription to raise up to £20 million (the “Offer“).

    The Company has also published a circular convening a general meeting (the “General Meeting“) to be held at 9.30 a.m. on 12 November 2024 at the offices of Canaccord Genuity Asset Management Limited, 88 Wood Street, London EC2V 7QR (the “Circular“). At the General Meeting, shareholders will be asked to approve: (i) share issuance authorities in relation to the Offer; and (ii) amendments to the Company’s articles of association in order to extend the date of the next continuation vote to the annual general meeting of the Company to be held in 2031.

    The Circular is available to download from the Company’s website, http://www.hargreaveaimvcts.co.uk, subject to certain access restrictions, and will also shortly be available for inspection at the National Storage Mechanism, https://data.fca.org.uk/#/nsm/nationalstoragemechanism.

    For further information please contact:
    Oliver Bedford, Canaccord Genuity Asset Management Limited
    Tel: 020 7523 4837

    Important Information
    This announcement and the information contained herein is not intended to, and does not, constitute or form part of any offer, invitation, or the solicitation of an offer, to purchase, otherwise acquire, subscribe for, sell or otherwise dispose of any securities or the solicitation of any vote or approval in any jurisdiction.

    The distribution of this announcement in jurisdictions other than the United Kingdom and the availability of the Offer to persons who are not resident in the United Kingdom may be affected by the laws of relevant jurisdictions. Therefore any persons who are subject to the laws of any jurisdiction other than the United Kingdom will need to inform themselves about, and observe, any applicable requirements.

    The MIL Network

  • MIL-OSI: Societe Generale: shares and voting rights as of 30 September 2024

    Source: GlobeNewswire (MIL-OSI)

    NUMBER OF SHARES COMPOSING CURRENT SHARE CAPITAL AND TOTAL NUMBER OF VOTING RIGHTS AS OF 30 SEPTEMBER 2024

    Regulated Information

    Paris, 10 October 2024

    Information about the total number of voting rights and shares pursuant to Article L.233-8 II of the French Commercial Code and Article 223-16 of the AMF General Regulations.

    Date: 30 September 2024
    Number of shares composing current share capital: 800,316,777
    Total number of voting rights (gross): 886,278,991

    Press contact:

    Jean-Baptiste Froville: +33 1 58 98 68 00, jean-baptiste.froville@socgen.com
    Fanny Rouby: +33 1 57 29 11 12, fanny.rouby@socgen.com

    Societe Generale

    Societe Generale is a top-tier European bank with more than 126,000 employees serving about 25 million clients in 65 countries across the world. We have been supporting the development of our economies for nearly 160 years, providing our corporate, institutional, and individual clients with a wide array of value-added advisory and financial solutions. Our long-lasting and trusted relationships with our clients, our cutting-edge expertise, our unique innovation, our ESG capabilities and leading franchises are part of our DNA and serve our most essential objective – to deliver sustainable value creation for all our stakeholders.

    The Group runs three complementary sets of businesses, embedding ESG offerings for all its clients:

    • French Retail, Private Banking and Insurance, with leading retail bank SG and insurance franchise, premium private banking services, and the leading digital bank BoursoBank.
    • Global Banking and Investor Solutions, a top-tier wholesale bank offering tailor-made solutions with distinctive global leadership in equity derivatives, structured finance and ESG.
    • Mobility, International Retail Banking and Financial Services, comprising well-established universal banks (in the Czech Republic, Romania and several African countries), Ayvens (the new ALD | LeasePlan brand), a global player in sustainable mobility, as well as specialized financing activities.

    Committed to building together with its clients a better and sustainable future, Societe Generale aims to be a leading partner in the environmental transition and sustainability overall. The Group is included in the principal socially responsible investment indices: DJSI (Europe), FTSE4Good (Global and Europe), Bloomberg Gender-Equality Index, Refinitiv Diversity and Inclusion Index, Euronext Vigeo (Europe and Eurozone), STOXX Global ESG Leaders indexes, and the MSCI Low Carbon Leaders Index (World and Europe).

    For more information, you can follow us on Twitter/X @societegenerale or visit our website societegenerale.com.

    Attachment

    The MIL Network

  • MIL-OSI: Form 8.3 – [LEARNING TECHNOLOGIES GROUP PLC – 09 10 2024] – (CGWL)

    Source: GlobeNewswire (MIL-OSI)

    FORM 8.3

    PUBLIC OPENING POSITION DISCLOSURE/DEALING DISCLOSURE BY
    A PERSON WITH INTERESTS IN RELEVANT SECURITIES REPRESENTING 1% OR MORE
    Rule 8.3 of the Takeover Code (the “Code”)

    1.        KEY INFORMATION

    (a)   Full name of discloser: CANACCORD GENUITY WEALTH LIMITED (for Discretionary clients)
    (b)   Owner or controller of interests and short positions disclosed, if different from 1(a):
            The naming of nominee or vehicle companies is insufficient. For a trust, the trustee(s), settlor and beneficiaries must be named.
    N/A
    (c)   Name of offeror/offeree in relation to whose relevant securities this form relates:
            Use a separate form for each offeror/offeree
    LEARNING TECHNOLOGIES GROUP PLC
    (d)   If an exempt fund manager connected with an offeror/offeree, state this and specify identity of offeror/offeree: N/A
    (e)   Date position held/dealing undertaken:
            For an opening position disclosure, state the latest practicable date prior to the disclosure
    09 OCTOBER 2024
    (f)   In addition to the company in 1(c) above, is the discloser making disclosures in respect of any other party to the offer?
            If it is a cash offer or possible cash offer, state “N/A”
    N/A

    2.        POSITIONS OF THE PERSON MAKING THE DISCLOSURE

    If there are positions or rights to subscribe to disclose in more than one class of relevant securities of the offeror or offeree named in 1(c), copy table 2(a) or (b) (as appropriate) for each additional class of relevant security.

    (a)      Interests and short positions in the relevant securities of the offeror or offeree to which the disclosure relates following the dealing (if any)

    Class of relevant security: 0.375p ORDINARY

                                                              Interests               Short positions
                                                              Number        %         Number    %
    (1) Relevant securities owned and/or controlled:          10,120,950    1.2776
    (2) Cash-settled derivatives:
    (3) Stock-settled derivatives (including options)
        and agreements to purchase/sell:
    TOTAL:                                                    10,120,950    1.2776

    All interests and all short positions should be disclosed.

    Details of any open stock-settled derivative positions (including traded options), or agreements to purchase or sell relevant securities, should be given on a Supplemental Form 8 (Open Positions).

    (b)      Rights to subscribe for new securities (including directors’ and other employee options)

    Class of relevant security in relation to which subscription right exists:  
    Details, including nature of the rights concerned and relevant percentages:  

    3.        DEALINGS (IF ANY) BY THE PERSON MAKING THE DISCLOSURE

    Where there have been dealings in more than one class of relevant securities of the offeror or offeree named in 1(c), copy table 3(a), (b), (c) or (d) (as appropriate) for each additional class of relevant security dealt in.

    The currency of all prices and other monetary amounts should be stated.

    (a)        Purchases and sales

    Class of relevant security    Purchase/sale    Number of securities    Price per unit
    0.375p ORDINARY               SALE             4,700                   93p
    0.375p ORDINARY               SALE             13,750                  93.1p
    0.375p ORDINARY               SALE             10,000                  93.121p
    0.375p ORDINARY               PURCHASE         1,115                   93.455p

    (b)        Cash-settled derivative transactions

    Class of relevant security | Product description (e.g. CFD) | Nature of dealing (e.g. opening/closing or increasing/reducing a long/short position) | Number of reference securities | Price per unit
    NONE

    (c)        Stock-settled derivative transactions (including options)

    (i)        Writing, selling, purchasing or varying

    Class of relevant security | Product description (e.g. call option) | Writing, purchasing, selling, varying etc. | Number of securities to which option relates | Exercise price per unit | Type (e.g. American, European etc.) | Expiry date | Option money paid/received per unit
    NONE

    (ii)        Exercise

    Class of relevant security | Product description (e.g. call option) | Exercising/exercised against | Number of securities | Exercise price per unit

    (d)        Other dealings (including subscribing for new securities)

    Class of relevant security | Nature of dealing (e.g. subscription, conversion) | Details | Price per unit (if applicable)
    NONE

    4.        OTHER INFORMATION

    (a)        Indemnity and other dealing arrangements

    Details of any indemnity or option arrangement, or any agreement or understanding, formal or informal, relating to relevant securities which may be an inducement to deal or refrain from dealing entered into by the person making the disclosure and any party to the offer or any person acting in concert with a party to the offer:
    Irrevocable commitments and letters of intent should not be included. If there are no such agreements, arrangements or understandings, state “none”

    NONE

    (b)        Agreements, arrangements or understandings relating to options or derivatives

    Details of any agreement, arrangement or understanding, formal or informal, between the person making the disclosure and any other person relating to:
    (i)   the voting rights of any relevant securities under any option; or
    (ii)   the voting rights or future acquisition or disposal of any relevant securities to which any derivative is referenced:
    If there are no such agreements, arrangements or understandings, state “none”

    NONE

    (c)        Attachments

    Is a Supplemental Form 8 (Open Positions) attached? NO
    Date of disclosure: 10 OCTOBER 2024
    Contact name: MARK ELLIOTT
    Telephone number: 01253 376539

    Public disclosures under Rule 8 of the Code must be made to a Regulatory Information Service.

    The Panel’s Market Surveillance Unit is available for consultation in relation to the Code’s disclosure requirements on +44 (0)20 7638 0129.

    The Code can be viewed on the Panel’s website at http://www.thetakeoverpanel.org.uk.

    The MIL Network

  • MIL-OSI: Capgemini’s World Energy Markets Observatory annual report 2024: The Paris Agreement’s goals are no longer achievable, but net zero is still in sight with accelerated efforts

    Source: GlobeNewswire (MIL-OSI)

    Press contact:
    Florence Lievre
    Tel.: +33 1 47 54 50 71
    Email: florence.lievre@capgemini.com

    Capgemini’s World Energy Markets Observatory annual report 2024:
    The Paris Agreement’s goals are no longer achievable, but net zero is still in sight with accelerated efforts

    • Despite impressive strides in 2023 and positive projections for 2024, the pace of renewable development isn’t fast enough
    • The critical role of nuclear energy to addressing increased clean energy demands is now recognized, but construction of new large power plants takes time and industrialization of Small Modular Reactors (SMRs) is proving complex
    • Addressing the complexity of energy transition challenges will require new market mechanisms encouraging further innovation, choosing appropriate measures, and accelerated public and private investment in low carbon technologies and the power grid

    Paris, October 10, 2024 – Capgemini has published the 26th edition of its annual World Energy Markets Observatory (WEMO), created in partnership with Hogan Lovells, Vaasa ETT and Enerdata. The report takes stock of the current state of the energy transition. Despite progress being made, greenhouse gas (GHG) emissions are continuing to increase, reaching a new record high of 37.4 gigatonnes (Gt) in 2023¹, confirming that the path to reaching the Paris Agreement’s objectives is not on track. The report provides insights on the key focus areas needed, moving forward, to address the complex energy transition challenges, including a change in how clean energy progress is measured, as well as accelerated investment in the power grid and clean technologies.

    James Forrest, Global Energy Transition & Utilities Industry Leader at Capgemini says: “Despite an historical spike in renewable penetration, the pace of development isn’t fast enough to close the gap. There is still much to do in the next decade to get closer to net zero by 2050 and achieve a successful energy transition: whether it be in the field of low carbon technologies, R&D efforts, nuclear or grid flexibility and storage. In addition, beyond the necessary adoption of new market mechanisms, a shift away from measuring energy based on primary consumption is needed. This measurement was relevant during past energy crises, but it is now time to adopt a more holistic approach. Moving to a final energy demand measurement would better assess clean energy progress and ensure more accurate projections.”

    Key observations from the 2024 report include:

    • There is a need to hasten the deployment of renewable energy globally, and to accelerate it in developing countries, to deliver the 2030 and 2050 decarbonization goals. The total amount of final energy provided by renewable energy is likely to be limited to about 40% of global needs. In 2023, total renewable energy capacity increased by 14% year on year, with a larger capacity expansion in solar (32%) than in wind (13%). But while 2024 promises to set another record, as has been the case in each of the previous 22 years, this growth is far below what is needed to achieve net zero carbon by 2050. Moreover, as renewable penetration rates increase, they affect grid stability, and pairing renewables with stationary batteries will become compulsory. According to the report, the development of storable renewable energies, such as biomass or geothermal energy, should be accelerated.
    • Hydrogen is now a strategic lever in the decarbonization path. The number of projects reaching final investment decision has quadrupled over the last two years. However, a refocus of applications has been observed due to the increasing costs of low-carbon hydrogen production, competition between uses, and regulations. Only certain uses in ‘Hard to Abate’ industries, such as heavy industry and maritime mobility, have strong potential.
    • Global nuclear capacity needs to triple to ensure stable, low-carbon power. COP28 recognized the critical role of nuclear energy in reducing the effects of climate change. While there is some promising progress in the nuclear renaissance, including Small Modular Reactors (SMRs), development of new nuclear power plants remains difficult. In 2023, 440 nuclear reactors (390 GW) provided 9% of the world’s electricity and 25% of the world’s low-carbon electricity. SMRs are in the planning or early construction stages, and many years remain before they are deployed at scale, as their industrialization can prove complex. According to the report, more focus needs to be placed on extending the life of existing nuclear plants.
    • The power grid plays a fundamental role in accelerating clean energy transitions. Grid investment is starting to pick up and is expected to reach USD 400 billion in 2024², with Europe, the United States, China and parts of Latin America leading the way. According to the report, better forecasting of electricity consumption and finer optimization scenarios, enabled by technologies such as AI, will help improve grid balancing.
    • Whilst AI has the potential to significantly accelerate decarbonization, a lack of skills and a focus on short-term proofs of concept have hampered adoption to date. However, AI coupled with GenAI in agentic LLM (Large Language Model) workflows³ has a clear role to play as a catalyst for improving grid efficiency; e-fuel discovery; new battery or wind turbine design; synthetic biology; and augmented insights from many data sources for better-informed decision making.
    • Protectionist approaches to increasing energy sovereignty may have undesirable implications. Ongoing geopolitical uncertainties are affecting energy markets and systems. To ensure security of supply, the use of embargoes, tariffs and subsidies in almost all jurisdictions is distorting energy markets and threatens the efficient allocation of capital. According to the report, embargoes are proving ineffective and are decreasing the transparency and traceability of energy supplies, which is essential to tracking decarbonization efforts. Denying access to the cheapest sources of energy equipment and energy supplies drives up prices for consumers and reduces funding available for the energy transition.
    • According to the report, ‘Primary Energy Demand’ is an outdated concept for the energy transition. There is a need to move from primary to final energy consumption measurement (in kWh) to ensure accurate projections and a true picture of clean energy progress. Measuring energy based on primary consumption ignores that: for the same end-energy services, new electric services are generally more efficient; a lot of fossil fuel is wasted in the generation of electricity; and energy is also wasted in finding and processing fossil fuels.

    The World Energy Markets Observatory (WEMO) is Capgemini’s annual thought leadership and research report, created in partnership with Hogan Lovells, Vaasa ETT and Enerdata, that tracks the transformation of global energy markets, including Europe, North America, Australia, Southeast Asia, India, and China. Now in its 26th edition, the report has been prepared by a global team of over 100 experts and includes 15 articles, all backed by rigorous analysis. The report begins with a global outlook, then covers the topics pivotal to the energy transition, including geopolitical impacts, demand-side energy transition, batteries, renewables, SMRs, hydrogen, industrial heat, GenAI and the Inflation Reduction Act (IRA).
    For more information and to get access to the report, click here

    About Capgemini
    Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
    Get The Future You Want | http://www.capgemini.com


    1 Source: IEA, CO2 Emissions in 2023
    2 Source: IEA, Electricity Grids and Secure Energy Transitions

    3 GenAI in agentic LLM (Large Language Model): iterative and collaborative model that transforms the interaction with LLMs into a series of manageable, refinable steps.

    Attachments

    The MIL Network

  • MIL-OSI: Sacks Parente Golf Inc. Announces Closing of $732,000 Underwritten Public Offering of Shares of Common Stock

    Source: GlobeNewswire (MIL-OSI)

    CAMARILLO, CA, Oct. 10, 2024 (GLOBE NEWSWIRE) — Sacks Parente Golf, Inc. (Nasdaq: SPGC) (“SPG” or the “Company”), a technology-forward golf company with a growing portfolio of golf products, including putters, golf shafts, golf grips, and other golf-related accessories, announced the closing of its underwritten public offering (the “Offering”) of 366,000 shares of Common Stock for aggregate gross proceeds of approximately $732,000, prior to deducting underwriting discounts and other offering expenses.

    The Company intends to use the net proceeds from this Offering for general corporate and working capital needs.

    The transaction closed on October 10, 2024.

    In addition, the Company has granted Aegis Capital Corp. a 45-day option to purchase additional shares of common stock of up to 15% of the number of shares of common stock sold in the Offering solely to cover over-allotments, if any. If this option is exercised in full, the total gross proceeds of the offering including over-allotments are expected to be approximately $842,000 before deducting underwriting discounts, commissions and offering expenses, which amount would essentially exhaust the maximum amount the Company can currently raise under its shelf registration statement.

    Aegis Capital Corp. acted as the sole book-running manager for the Offering. TroyGould PC acted as counsel to the Company. Kaufman & Canoles, P.C. acted as counsel to Aegis Capital Corp.

    The Offering was made pursuant to an effective registration statement on Form S-3 (No. 333-281664) previously filed with the U.S. Securities and Exchange Commission (SEC) and declared effective by the SEC on September 23, 2024. A preliminary prospectus (the “Preliminary Prospectus”) describing the terms of the proposed offering was filed with the SEC and is available on the SEC’s website located at http://www.sec.gov. Electronic copies of the Preliminary Prospectus may be obtained by contacting Aegis Capital Corp., Attention: Syndicate Department, 1345 Avenue of the Americas, 27th floor, New York, NY 10105, by email at syndicate@aegiscap.com, or by telephone at (212) 813-1010. Before investing in this Offering, interested parties should read in their entirety the registration statement and the Preliminary Prospectus and the other documents that the Company has filed with the SEC that are incorporated by reference in such registration statement and the Preliminary Prospectus, which provide more information about the Company and the Offering.

    This press release shall not constitute an offer to sell or the solicitation of an offer to buy nor shall there be any sale of these securities in any state or jurisdiction in which such offer, solicitation or sale would be unlawful prior to registration or qualification under the securities laws of any such state or jurisdiction.

    About Sacks Parente Golf, Inc.

    Sacks Parente Golf, Inc. is a technology-forward golf company that helps golfers elevate their game. With a growing portfolio of golf products, including putters, golf shafts, golf grips, and other golf-related accessories, the Company’s innovative accomplishments include: the first Vernier Acuity putter, patented Ultra-Low Balance Point (ULBP) putter technology, weight-forward Center-of-Gravity (CG) design, and pioneering ultra-light carbon fiber putter shafts.

    Forward-Looking Statements

    The foregoing material may contain “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934, each as amended. Forward-looking statements include all statements that do not relate solely to historical or current facts, including without limitation statements regarding the Company’s product development and business prospects, and can be identified by the use of words such as “may,” “will,” “expect,” “project,” “estimate,” “anticipate,” “plan,” “believe,” “potential,” “should,” “continue” or the negative versions of those words or other comparable words. Forward-looking statements are not guarantees of future actions or performance. These forward-looking statements are based on information currently available to the Company and its current plans or expectations and are subject to a number of risks and uncertainties that could significantly affect current plans. Should one or more of these risks or uncertainties materialize, or the underlying assumptions prove incorrect, actual results may differ significantly from those anticipated, believed, estimated, expected, intended, or planned. Although the Company believes that the expectations reflected in the forward-looking statements are reasonable, the Company cannot guarantee future results, performance, or achievements. Except as required by applicable law, including the security laws of the United States, the Company does not intend to update any of the forward-looking statements to conform these statements to actual results.

    Investor Contact for Sacks Parente Golf, Inc.:
    Tel: (855) 774-7888, Option 8
    investors@sacksparente.com

    The MIL Network

  • MIL-OSI: TSplus Wraps Up Another Successful Quarter with Major Developments and New Product Enhancements

    Source: GlobeNewswire (MIL-OSI)

    LYON, France, Oct. 10, 2024 (GLOBE NEWSWIRE) — TSplus recently held its quarterly meeting in Lyon, where the entire headquarters gathered to celebrate milestones, strategize for the future, and share some exciting product updates. The company, known for its innovation and affordable alternatives to Citrix, is on track to expand its presence globally and further strengthen its offerings.

    Dominique Benoit, CEO of TSplus, opened the meeting by highlighting the company’s rapid growth, with about 600,000 clients and 8,000 resellers across the world. He emphasized TSplus’ position as the “French Citrix-Killer,” with upcoming subscription models for TSplus Remote Access poised to capture more market share.

    “We’re building towards an exciting future,” Dominique said. “By 2030, we aim to grow from 80 employees to 500, and we’re already laying the groundwork with new strategic developments.”

    Powering the Future of Remote Access

    This quarter has seen remarkable growth, with invoice numbers doubling and a projected 15% revenue increase by year-end. The company’s flagship product, Remote Access Enterprise, has emerged as a best-seller, and key markets like India, France, and the USA host the largest customers. Additionally, TSplus is proud to announce the official launch of TSplus China, located near Shanghai, marking an important milestone in expanding its presence in the Asia-Pacific region.

    Advanced Security, Remote Access and Beyond

    The Development Team has been hard at work, with the upcoming release of Advanced Security 7.1 taking center stage. This release, still in beta, will introduce a completely revamped user interface, providing a smoother and more intuitive experience. New features will also be included to increase risk awareness and strengthen protection performance.

    In other product news, Remote Access has seen over 30 updates and 40 fixes, such as improvements to the Universal Printer and a sleek new Web Portal. Remote Support now boasts 2FA protection, cross-platform compatibility across macOS and Windows devices, and a soon-to-be-released Android app.

    Leader To Be in Secure Remote Access Solutions

    TSplus has also focused on enhancing its digital presence, with a complete redesign of TSplus.net. The revamped website has significantly boosted traffic, generating 20% sales growth. Meanwhile, the Licensing Portal has been simplified, making it easier for resellers to navigate.

    As AI continues to shape the marketing landscape and Google’s ranking algorithms, TSplus is staying ahead by creating high-quality videos, blog posts, and website enhancements, further expanding the company’s visibility across the Web.

    With ambitious plans on the horizon, TSplus is set to roll out additional updates, including a full overhaul of all their showcase websites. These developments will further solidify TSplus’ position as a global leader in secure remote access solutions.

    Try any TSplus software for free today with a 15-day trial by visiting http://www.tsplus.net.

    Media Contact:
    Floriane Mer
    Marketing Manager at TSplus
    floriane.mer@tsplus.net

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/03affb29-cb02-4102-8792-863ea0b86f83

    The MIL Network

  • MIL-OSI: Form 8.3 – [ECKOH PLC – 09 10 2024] – (CGWL)

    Source: GlobeNewswire (MIL-OSI)

    FORM 8.3

    PUBLIC OPENING POSITION DISCLOSURE/DEALING DISCLOSURE BY
    A PERSON WITH INTERESTS IN RELEVANT SECURITIES REPRESENTING 1% OR MORE
    Rule 8.3 of the Takeover Code (the “Code”)

    1.        KEY INFORMATION

    (a)   Full name of discloser: CANACCORD GENUITY WEALTH LIMITED (for Discretionary clients)
    (b)   Owner or controller of interests and short positions disclosed, if different from 1(a):
            The naming of nominee or vehicle companies is insufficient. For a trust, the trustee(s), settlor and beneficiaries must be named.
    N/A
    (c)   Name of offeror/offeree in relation to whose relevant securities this form relates:
            Use a separate form for each offeror/offeree
    ECKOH PLC
    (d)   If an exempt fund manager connected with an offeror/offeree, state this and specify identity of offeror/offeree: N/A
    (e)   Date position held/dealing undertaken:
            For an opening position disclosure, state the latest practicable date prior to the disclosure
    09 OCTOBER 2024
    (f)   In addition to the company in 1(c) above, is the discloser making disclosures in respect of any other party to the offer?
            If it is a cash offer or possible cash offer, state “N/A”
    N/A

    2.        POSITIONS OF THE PERSON MAKING THE DISCLOSURE

    If there are positions or rights to subscribe to disclose in more than one class of relevant securities of the offeror or offeree named in 1(c), copy table 2(a) or (b) (as appropriate) for each additional class of relevant security.

    (a)      Interests and short positions in the relevant securities of the offeror or offeree to which the disclosure relates following the dealing (if any)

    Class of relevant security: 10p ORDINARY

                                                                       Interests              Short positions
                                                                       Number       %         Number       %
    (1)  Relevant securities owned and/or controlled:                  20,954,086   7.2114
    (2)  Cash-settled derivatives:
    (3)  Stock-settled derivatives (including options) and
         agreements to purchase/sell:
    TOTAL:                                                             20,954,086   7.2114

    All interests and all short positions should be disclosed.

    Details of any open stock-settled derivative positions (including traded options), or agreements to purchase or sell relevant securities, should be given on a Supplemental Form 8 (Open Positions).

    (b)      Rights to subscribe for new securities (including directors’ and other employee options)

    Class of relevant security in relation to which subscription right exists:  
    Details, including nature of the rights concerned and relevant percentages:  

    3.        DEALINGS (IF ANY) BY THE PERSON MAKING THE DISCLOSURE

    Where there have been dealings in more than one class of relevant securities of the offeror or offeree named in 1(c), copy table 3(a), (b), (c) or (d) (as appropriate) for each additional class of relevant security dealt in.

    The currency of all prices and other monetary amounts should be stated.

    (a)        Purchases and sales

    Class of relevant security   Purchase/sale   Number of securities   Price per unit
    10p ORDINARY                 SALE            34,325                 41.76p
    10p ORDINARY                 SALE            30,425                 41.81p

    (b)        Cash-settled derivative transactions

    Class of relevant security   Product description (e.g. CFD)   Nature of dealing (e.g. opening/closing or increasing/reducing a long/short position)   Number of reference securities   Price per unit
    NONE

    (c)        Stock-settled derivative transactions (including options)

    (i)        Writing, selling, purchasing or varying

    Class of relevant security   Product description (e.g. call option)   Writing, purchasing, selling, varying etc.   Number of securities to which option relates   Exercise price per unit   Type (e.g. American, European etc.)   Expiry date   Option money paid/received per unit
    NONE

    (ii)        Exercise

    Class of relevant security   Product description (e.g. call option)   Exercising/exercised against   Number of securities   Exercise price per unit

    (d)        Other dealings (including subscribing for new securities)

    Class of relevant security   Nature of dealing (e.g. subscription, conversion)   Details   Price per unit (if applicable)
    NONE

    4.        OTHER INFORMATION

    (a)        Indemnity and other dealing arrangements

    Details of any indemnity or option arrangement, or any agreement or understanding, formal or informal, relating to relevant securities which may be an inducement to deal or refrain from dealing entered into by the person making the disclosure and any party to the offer or any person acting in concert with a party to the offer:
    Irrevocable commitments and letters of intent should not be included. If there are no such agreements, arrangements or understandings, state “none”

    NONE

    (b)        Agreements, arrangements or understandings relating to options or derivatives

    Details of any agreement, arrangement or understanding, formal or informal, between the person making the disclosure and any other person relating to:
    (i)   the voting rights of any relevant securities under any option; or
    (ii)   the voting rights or future acquisition or disposal of any relevant securities to which any derivative is referenced:
    If there are no such agreements, arrangements or understandings, state “none”

    NONE

    (c)        Attachments

    Is a Supplemental Form 8 (Open Positions) attached? NO
    Date of disclosure: 10 OCTOBER 2024
    Contact name: MARK ELLIOTT
    Telephone number: 01253 376539

    Public disclosures under Rule 8 of the Code must be made to a Regulatory Information Service.

    The Panel’s Market Surveillance Unit is available for consultation in relation to the Code’s disclosure requirements on +44 (0)20 7638 0129.

    The Code can be viewed on the Panel’s website at http://www.thetakeoverpanel.org.uk.

    The MIL Network

  • MIL-OSI: Form 8.3 – [KEYWORDS STUDIOS PLC] – 09 10 2024 – (CGWL)

    Source: GlobeNewswire (MIL-OSI)

    FORM 8.3

    PUBLIC OPENING POSITION DISCLOSURE/DEALING DISCLOSURE BY
    A PERSON WITH INTERESTS IN RELEVANT SECURITIES REPRESENTING 1% OR MORE
    Rule 8.3 of the Takeover Code (the “Code”)

    1.        KEY INFORMATION

    (a)   Full name of discloser: CANACCORD GENUITY WEALTH LIMITED (for Discretionary clients)
    (b)   Owner or controller of interests and short positions disclosed, if different from 1(a):
            The naming of nominee or vehicle companies is insufficient. For a trust, the trustee(s), settlor and beneficiaries must be named.
    N/A
    (c)   Name of offeror/offeree in relation to whose relevant securities this form relates:
            Use a separate form for each offeror/offeree
    KEYWORDS STUDIOS PLC
    (d)   If an exempt fund manager connected with an offeror/offeree, state this and specify identity of offeror/offeree: N/A
    (e)   Date position held/dealing undertaken:
            For an opening position disclosure, state the latest practicable date prior to the disclosure
    09 OCTOBER 2024
    (f)   In addition to the company in 1(c) above, is the discloser making disclosures in respect of any other party to the offer?
            If it is a cash offer or possible cash offer, state “N/A”
    N/A

    2.        POSITIONS OF THE PERSON MAKING THE DISCLOSURE

    If there are positions or rights to subscribe to disclose in more than one class of relevant securities of the offeror or offeree named in 1(c), copy table 2(a) or (b) (as appropriate) for each additional class of relevant security.

    (a)      Interests and short positions in the relevant securities of the offeror or offeree to which the disclosure relates following the dealing (if any)

    Class of relevant security: 1p ORDINARY

                                                                       Interests              Short positions
                                                                       Number       %         Number       %
    (1)  Relevant securities owned and/or controlled:                  1,368,324    1.6991
    (2)  Cash-settled derivatives:
    (3)  Stock-settled derivatives (including options) and
         agreements to purchase/sell:
    TOTAL:                                                             1,368,324    1.6991

    All interests and all short positions should be disclosed.

    Details of any open stock-settled derivative positions (including traded options), or agreements to purchase or sell relevant securities, should be given on a Supplemental Form 8 (Open Positions).

    (b)      Rights to subscribe for new securities (including directors’ and other employee options)

    Class of relevant security in relation to which subscription right exists:  
    Details, including nature of the rights concerned and relevant percentages:  

    3.        DEALINGS (IF ANY) BY THE PERSON MAKING THE DISCLOSURE

    Where there have been dealings in more than one class of relevant securities of the offeror or offeree named in 1(c), copy table 3(a), (b), (c) or (d) (as appropriate) for each additional class of relevant security dealt in.

    The currency of all prices and other monetary amounts should be stated.

    (a)        Purchases and sales

    Class of relevant security   Purchase/sale   Number of securities   Price per unit
    1p ORDINARY                  SALE            1,110                  2438.2p

    (b)        Cash-settled derivative transactions

    Class of relevant security   Product description (e.g. CFD)   Nature of dealing (e.g. opening/closing or increasing/reducing a long/short position)   Number of reference securities   Price per unit
    NONE

    (c)        Stock-settled derivative transactions (including options)

    (i)        Writing, selling, purchasing or varying

    Class of relevant security   Product description (e.g. call option)   Writing, purchasing, selling, varying etc.   Number of securities to which option relates   Exercise price per unit   Type (e.g. American, European etc.)   Expiry date   Option money paid/received per unit
    NONE

    (ii)        Exercise

    Class of relevant security   Product description (e.g. call option)   Exercising/exercised against   Number of securities   Exercise price per unit

    (d)        Other dealings (including subscribing for new securities)

    Class of relevant security   Nature of dealing (e.g. subscription, conversion)   Details   Price per unit (if applicable)
    NONE

    4.        OTHER INFORMATION

    (a)        Indemnity and other dealing arrangements

    Details of any indemnity or option arrangement, or any agreement or understanding, formal or informal, relating to relevant securities which may be an inducement to deal or refrain from dealing entered into by the person making the disclosure and any party to the offer or any person acting in concert with a party to the offer:
    Irrevocable commitments and letters of intent should not be included. If there are no such agreements, arrangements or understandings, state “none”

    NONE

    (b)        Agreements, arrangements or understandings relating to options or derivatives

    Details of any agreement, arrangement or understanding, formal or informal, between the person making the disclosure and any other person relating to:
    (i)   the voting rights of any relevant securities under any option; or
    (ii)   the voting rights or future acquisition or disposal of any relevant securities to which any derivative is referenced:
    If there are no such agreements, arrangements or understandings, state “none”

    NONE

    (c)        Attachments

    Is a Supplemental Form 8 (Open Positions) attached? NO
    Date of disclosure: 10 OCTOBER 2024
    Contact name: MARK ELLIOTT
    Telephone number: 01253 376539

    Public disclosures under Rule 8 of the Code must be made to a Regulatory Information Service.

    The Panel’s Market Surveillance Unit is available for consultation in relation to the Code’s disclosure requirements on +44 (0)20 7638 0129.

    The Code can be viewed on the Panel’s website at http://www.thetakeoverpanel.org.uk.

    The MIL Network

  • MIL-OSI: Phunware to Participate in Webull Virtual Corporate Connect Webinar

    Source: GlobeNewswire (MIL-OSI)

    AUSTIN, Texas, Oct. 10, 2024 (GLOBE NEWSWIRE) — Phunware, Inc. (“Phunware” or the “Company”) (NASDAQ: PHUN), a leader in enterprise cloud solutions for mobile applications, announces that it will participate in the Webull LIVE! with Corporate Connect: Technology Investment Webinar on Wednesday, October 16, 2024 at 2pm ET. Phunware CEO Mike Snavely will present an overview of the Company and its strategic path forward and then participate in a Q&A session with investors.

       
    Presentation Date: October 16, 2024
    Time: 2pm ET
    Webinar Link: Click here to register
       

    About Phunware
    Leading hospitality brands partner with Phunware to delight their guests with personalized mobile experiences. Phunware’s mobile applications and SDKs leverage patented wayfinding and contextual engagement to guide guests to the right experience at the right time. Hotels, resorts, casinos, and convention centers can integrate their most important business systems to unify the guest journey, boost loyalty, and drive new revenue across their properties.

    https://www.phunware.com/solutions/hospitality/

    Safe Harbor
    This press release includes forward-looking statements. All statements other than statements of historical facts contained in this press release, including statements regarding our future results of operations and financial position, business strategy and plans, and our objectives for future operations, are forward-looking statements. The words “anticipate,” “believe,” “continue,” “could,” “estimate,” “expect,” “expose,” “intend,” “may,” “might,” “opportunity,” “plan,” “possible,” “potential,” “predict,” “project,” “should,” “will,” “would” and similar expressions that convey uncertainty of future events or outcomes are intended to identify forward-looking statements, but the absence of these words does not mean that a statement is not forward-looking. For example, Phunware is using forward-looking statements when it discusses the proposed offering and the timing and terms of such offering and its intended use of proceeds from such offering should it occur. 

    The forward-looking statements contained in this press release are based on our current expectations and beliefs concerning future developments and their potential effects on us. Future developments affecting us may not be those that we have anticipated. These forward-looking statements involve a number of risks, uncertainties (some of which are beyond our control) and other assumptions that may cause actual results or performance to be materially different from those expressed or implied by these forward-looking statements. These risks and uncertainties include, but are not limited to, those factors described under the heading “Risk Factors” in our filings with the SEC, including our reports on Forms 10-K, 10-Q, 8-K and other filings that we make with the SEC from time to time. Should one or more of these risks or uncertainties materialize, or should any of our assumptions prove incorrect, actual results may vary in material respects from those projected in these forward-looking statements. We undertake no obligation to update or revise any forward-looking statements, whether as a result of new information, future events or otherwise, except as may be required under applicable securities laws. These risks and others described under “Risk Factors” in our SEC filings may not be exhaustive.

    Phunware Media Inquiries: 
    MZ Group, North America
    Joe McGurk, Managing Director
    917-259-6895
    PHUN@mzgroup.us

    Phunware Investor Relations: 
    CORE IR
    516-222-2560
    investorrelations@phunware.com

    The MIL Network

  • MIL-OSI: Blockgraph Announces Release of Blockgraph OnDemand, a Self-Serve First-Party Data Onboarding Solution

    Source: GlobeNewswire (MIL-OSI)

    NEW YORK, Oct. 10, 2024 (GLOBE NEWSWIRE) — Blockgraph, a leading privacy-centric identity and data collaboration platform designed to fuel the future of connected TV advertising, today announced the launch of Blockgraph OnDemand, a new self-serve data onboarding solution that lets advertisers of any size leverage their first-party audience and purchase data with participating media seller and ad tech platform partners. The new offering makes it easy for marketers to quickly and securely execute and measure targeted campaigns across connected TV households, as well as attribute purchases to ad exposures. The cost-effective solution offers simple sign-up with no upfront fees, facilitating easy onboarding of first-party data with no intermediaries and delivering efficient, privacy-focused audience-based targeting, measurement, and attribution.

    Due to ongoing signal loss in digital advertising, marketers must find new technology solutions to harness their first-party data and effectively target and engage their most relevant audiences. CTV has been an increasingly favored solution for many large and enterprise advertisers, allowing them to use first-party data to connect with viewers across multiple screens and devices. Until now, however, these targeting methods have been difficult for smaller and mid-sized advertisers to execute due to cost and technology limitations.

    “First-party data is an essential asset for marketers seeking to improve campaign performance, particularly those who have relied on cookies and other third-party data sources,” said Jason Manningham, CEO of Blockgraph. “Our new self-serve data onboarding solution democratizes access to first-party data targeting and measurement, ensuring that smart, forward-thinking marketers can run successful campaigns with participating supply and platform partners. With this launch, we are looking forward to taking our business from our first 50 to our next 500 customers by focusing on an underserved and valuable segment of marketers.”

    Blockgraph OnDemand allows advertisers to sign up for free online and securely upload their first-party data through a user-friendly interface. Audience and purchase data can be deterministically matched to select media and platform partners in a privacy-centric manner, enabling relevant ads to be planned, delivered, and measured on a household level. As a result, advertisers using Blockgraph OnDemand can take advantage of a one-step process to execute linear and streaming CTV campaigns as easily as walled garden buys, while also being able to transparently assess performance.

    “We share Blockgraph’s commitment to providing local small and medium-sized businesses with easier access to the tools they need to optimize the outcomes of their advertising campaigns,” said Rob Klippel, Senior Vice President of Product, Technology & Operations at Spectrum Reach, the advertising sales business for Charter Communications, Inc. “The ability to target and measure multiscreen TV campaigns is vital for local advertisers; with Blockgraph OnDemand these capabilities will be expanded and become more accessible to a broader customer base.”

    “Self-serve platforms like Blockgraph OnDemand are important advancements for marketers – empowering agencies like PMG and our customers to confidently leverage their first-party data for converged TV campaigns with precision, control, and ease,” said Mike Treon, Head of CTV and Video at PMG. “By making it easier to match and activate data securely and transparently, Blockgraph OnDemand is removing barriers, so brands of all sizes can scale their CTV strategies in premium environments, directly and flexibly with media partners and their addressable user bases, which is critical for both operational ease and campaign success.”

    About Blockgraph
    Blockgraph is a leading privacy-centric identity and data collaboration platform designed to fuel the future of connected TV advertising. The world’s leading media, technology, and information services companies collaborate with trusted partners using Blockgraph’s privacy-focused platform to create and implement identity-based targeting and measurement solutions for multiscreen advertising. Blockgraph is owned by Charter Communications Inc., Comcast NBCUniversal, and Paramount. For more information, please visit Blockgraph at http://www.blockgraph.co.

    Contact:
    Alexandra Levy
    650-996-5758
    alex@siliconalley-media.com

    The MIL Network

  • MIL-OSI: Cloudera Expands Industry-Leading Enterprise AI Ecosystem with New Partners

    Source: GlobeNewswire (MIL-OSI)

    New partners Anthropic, Google Cloud, and Snowflake join Cloudera’s AI Ecosystem at EVOLVE24 New York event

    Ecosystem of technology providers makes it easier, more economical, and safer for enterprises to maximize the value of AI initiatives

    SANTA CLARA, Calif. and NEW YORK, Oct. 10, 2024 (GLOBE NEWSWIRE) — Cloudera, a hybrid platform for data, analytics, and AI, today announced the expansion of its Enterprise AI Ecosystem during its annual data and AI conference, EVOLVE24 New York. This initiative brings together a diverse group of industry-leading AI providers to deliver comprehensive, end-to-end AI solutions for customers that help to maximize the value of AI.

    Large enterprises have special requirements for running AI applications at scale, including:

    • Demonstrating business value that justifies the total cost of ownership within a reasonable timeframe.
    • Adhering to strict security and privacy standards to protect sensitive data and maintain compliance.
    • Maintaining the flexibility to deploy a diverse range of models from a broad selection of vendors in the optimal environment for each use case – where the supporting data often resides.

    At last year’s EVOLVE conference, Cloudera launched the Enterprise AI Ecosystem, with these founding members:

    • NVIDIA, which provides full-stack accelerated computing for the development and deployment of AI workloads in both private and public clouds. Cloudera’s recent announcement highlighted the expansion of its Cloudera AI Inference service through the integration of NVIDIA NIM, part of the NVIDIA AI Enterprise software platform, a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing across clouds, data centers and workstations.
    • Amazon Web Services (AWS) with Amazon Bedrock, which allows customers to build and scale generative AI applications with a single API.
    • Pinecone for its leading vector database, which underpins the most common technical AI use cases: Retrieval-Augmented Generation (RAG) and semantic search.

    Over the last year, the Enterprise AI Ecosystem has generated significant inbound interest and a steady flow of requests for Cloudera to build on its existing AI partnerships and establish new ones. Now Cloudera is proud to introduce its newest set of AI Ecosystem partners at EVOLVE24 New York. They are:

    • Google Cloud: Google Cloud’s Vertex AI Model Garden provides a centralized hub for discovering, customizing, and deploying a diverse range of models. This includes a selection of over 150 first-party, open, and third-party foundation models, including Google’s Gemini, Chirp, Imagen, and more. Google Cloud’s infrastructure also supports Cloudera’s DataHub platform, which serves as the data foundation for building AI applications.

      Additionally, as the first collaboration under the expanded ecosystem, Cloudera released an Applied Machine Learning Prototype (AMP) entitled “Summarization with Gemini from Vertex AI” to help customers quickly deploy a summarization use case that takes advantage of the cost effectiveness and performance of Gemini Pro models accessed from the Vertex AI Model Garden via API.

    • Anthropic: Anthropic’s Claude large language models (LLMs) are ideal for code generation, vision analysis, data insight and text generation use cases. Anthropic’s family of Claude models will allow Cloudera users to balance performance and cost, while Anthropic’s commitment to AI safety research helps to ensure reliable, unbiased, and non-harmful outputs. Cloudera is releasing an AMP entitled “Image Analysis with Anthropic’s Claude LLM” that will significantly reduce the time to develop a production image analysis application. Cloudera is also making Claude the default foundation model for its Cloudera AI Coding Co-pilot.
    • Snowflake: Cloudera and Snowflake, the AI Data Cloud company, are building on their strategic collaboration, also announced at EVOLVE24, with Snowflake’s Arctic Embed models, which excel at SQL generation and offer strong cost-performance ratios. Snowflake’s Iceberg-enabled platform provides interoperability with Cloudera, facilitating the sharing of data to feed AI use cases. Cloudera is actively working on product integrations with Snowflake, which can be read about here.

    “We pioneered the Enterprise AI Ecosystem to cater to the complex and continually evolving enterprise-grade security, privacy, authorization, and LLM demands of major organizations; this involves a complete suite of solutions across accelerated compute, semantic querying, vector embeddings, multi-modal agents, RAG applications, fine-tuning, and frontier models,” stated Abhas Ricky, Chief Strategy Officer at Cloudera. “AI researchers and practitioners have since deployed 400+ cutting-edge AI accelerators (AMPs) and numerous agentic applications supporting high-value use cases such as voice of customer analysis, invoice reconciliation, and underwriting automation. Together we are delivering a fully integrated Enterprise AI platform, built on leading models and knowledge bases, to further production-ready high fidelity solutions delivered with experts by your side.”

    “OCBC has delivered dozens of Gen AI applications into production leveraging Cloudera AI and technologies from The Enterprise AI Ecosystem members,” said Adrien Chenailler, Head of Data Science and AI at OCBC Bank. “Our call center transcription application transcribes thousands of hours of calls daily and has led to a significant reduction in average call handling time. We have reduced the investment in research time of our Relationship Managers with GenAI. We’re delighted that Cloudera continues to expand their Enterprise AI Ecosystem because it delivers proven solution architectures that get us from prototype to production faster.”

    “Our partnership with Cloudera helps organizations extract hidden value in their enterprise data, including complex sources like images,” said Kate Jensen, Head of Growth and Revenue at Anthropic. “The new Image Analysis capability turns visual data from images, charts or graphics into actionable insights, while Claude as the default model for Cloudera AI Coding Assistant, and potential other use cases such as Text to SQL and NLP Co-pilots provides customers with a powerful AI assistant that boosts productivity and uncovers new opportunities in their data. Together, we’re transforming raw data into actionable intelligence, empowering businesses to make smarter decisions faster.”

    “We are thrilled to work with Cloudera to integrate Snowflake’s Arctic Embed models into Cloudera AI Inference powered by NVIDIA’s NIM,” said Baris Gultekin, Head of AI, Snowflake. “This collaboration will empower our joint customers to unlock the full potential of generative AI at scale, driving faster insights, enhanced decision-making, and transformative business outcomes. Together, Snowflake and Cloudera are pushing the boundaries of what’s possible with modern data platforms, providing businesses with the agility and intelligence they need to stay ahead in an increasingly AI-driven world.”

    Cloudera’s existing group of Enterprise AI Ecosystem partners, including NVIDIA and AWS, will also be in the spotlight at EVOLVE24 New York, happening today, October 10.

    Click here to learn more about how Cloudera and its partner ecosystem are making it easier, more economical, and safer for enterprises to maximize the value they get from AI.

    About Cloudera

    Cloudera is a hybrid platform for data, analytics, and AI. With 100x more data under management than other cloud-only vendors, Cloudera empowers global enterprises to transform data of all types, on any public or private cloud, into valuable, trusted insights. Our open data lakehouse delivers scalable and secure data management with portable cloud-native analytics, enabling customers to bring GenAI models to their data while maintaining privacy and ensuring responsible, reliable AI deployments. The world’s largest brands in financial services, insurance, media, manufacturing, and government rely on Cloudera to use their data to solve what was impossible—today and in the future.

    To learn more, visit Cloudera.com and follow us on LinkedIn and X. Cloudera and associated marks are trademarks or registered trademarks of Cloudera, Inc. All other company and product names may be trademarks of their respective owners.

    Contact
    Jess Hohn-Cabana
    cloudera@v2comms.com

    The MIL Network

  • MIL-OSI: American Rebel CEO and American Rebel to be Featured at NHRA FallNationals Pre-Stage Fan Fest October 10 in Waxahachie, Texas

    Source: GlobeNewswire (MIL-OSI)

    CEO Andy Ross to Headline Music Main Stage with American Rebel Light Beer Bus in Attendance

    Nashville, TN, Oct. 10, 2024 (GLOBE NEWSWIRE) — American Rebel Holdings, Inc. (NASDAQ: AREB) (“American Rebel” or the “Company”), America’s Patriotic Brand and the creator of American Rebel Beer (http://www.americanrebelbeer.com), as well as branded safes, personal security and self-defense products and apparel, today announced the company will be featured at the National Hot Rod Association (“NHRA”) FallNationals Pre-Stage Fan Fest on October 10, 2024, at Railyard Park in Waxahachie, Texas.

    Andy Ross, CEO of American Rebel, is the music headliner at the Pre-Stage Fan Fest, and the American Rebel Light Beer bus will be in attendance for guests. The free event, which features food trucks, live music, and activities for the whole family, takes place from 6-9 p.m. at Railyard Park (455 S. College St., Waxahachie, TX 75165). The company also provided American Rebel Light Beer at the previous night’s Champions Dinner.

    “Every year, the Pre-Stage Fan Fest gets bigger and bigger,” said Christie Meyer Johnson, Texas Motorplex co-owner. “We love having so many drivers spend time with the fans before the race starts. Last year, we added the JEGS Allstars participants, and now, we have one of the largest autograph sessions in all motorsports. This year, we have added Andy Ross to the main stage to rock out for all our fans in attendance.”

    The Pre-Stage Fan Fest (https://www.stampedeofspeed.com/event/thursday-jegs-all-stars) is an opportunity for drivers to spend time with the fans before the race starts, with one of the largest autograph sessions in all motorsports. More than 50 NHRA Mission Foods Drag Racing Series stars will take part, including fan favorites Justin Ashley, Ron Capps, Antron Brown, Texans Steve Torrence and Erica Enders, and Matt and Angie Smith. Reigning 2023 Texas FallNationals champions Matt Hagan, Erica Enders, and Gaige Herrera will be in attendance, as well as local drivers Buddy Hull and Kebin Kinsley.

    “We are thrilled to help kick off the FallNationals for the NHRA and its racing community with an evening of music, food and drinks,” said Andy Ross, Chief Executive Officer of American Rebel. “Our partnership with the NHRA continues to provide strategic opportunities to get our American Rebel Light Beer brand in front of the perfect patriotic fanbase.”

    The Texas NHRA FallNationals at the Texas Motorplex near Dallas is the 18th race on the NHRA Mission Foods Drag Racing Series’ 20-race schedule, and it is the fourth round in the six-race Countdown to the Championship. Tony Stewart Racing (TSR) drivers Tony Stewart (Top Fuel) and Matt Hagan (Funny Car) are both in the Countdown, with 2024 marking Stewart’s first appearance in the NHRA postseason and Hagan’s 13th.

    “To get a win in Texas for the fifth time would be huge. You just obviously want to keep doing well at tracks that treat you well, and they (Texas Motorplex) do a really good job promoting the event. We (Leah Pruett and Hagan) have the Champions Dinner on Wednesday night, the Fan Fest on Thursday night that Andy Ross (American Rebel CEO) is going to be singing at. It’s just going to be a great weekend. We have multiple sponsors that will be there, with Johnson’s Horsepowered Garage on the car and Andy Ross. It’s going to be a great thing, and if we can pull down the fifth win at Texas Motorplex, I think it would be the icing on the cake.”

    About NHRA FallNationals

    The Countdown to the Championship blazes into Texas for the Stampede of Speed week, capped off with the Texas NHRA FallNationals. The Stampede of Speed is a ten-day festival of music, drag racing and amazing fan experiences leading up to the Texas NHRA FallNationals hosted at the historic Texas Motorplex, located just 35 minutes from Dallas and Fort Worth. For more information visit http://www.nhra.com.

    About American Rebel Holdings, Inc.

    American Rebel Holdings, Inc. (NASDAQ: AREB) has operated primarily as a designer, manufacturer and marketer of branded safes and personal security and self-defense products and has recently transitioned into the beverage industry through the introduction of American Rebel Beer by its wholly-owned subsidiary American Rebel Beverages, LLC. The Company also designs and produces branded apparel and accessories. To learn more, visit http://www.americanrebel.com and http://www.americanrebelbeer.com. For investor information, visit http://www.americanrebel.com/investor-relations.

    Forward-Looking Statements

    This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. American Rebel Holdings, Inc., (NASDAQ: AREB; AREBW) (the “Company,” “American Rebel,” “we,” “our” or “us”) desires to take advantage of the safe harbor provisions of the Private Securities Litigation Reform Act of 1995 and is including this cautionary statement in connection with this safe harbor legislation. The words “forecasts” “believe,” “may,” “estimate,” “continue,” “anticipate,” “intend,” “should,” “plan,” “could,” “target,” “potential,” “is likely,” “expect” and similar expressions, as they relate to us, are intended to identify forward-looking statements. We have based these forward-looking statements primarily on our current expectations and projections about future events and financial trends that we believe may affect our financial condition, results of operations, business strategy, and financial needs. Important factors that could cause actual results to differ from those in the forward-looking statements include continued increase in revenues, actual size of Best Brands, actual sales to be derived from Best Brands, implied or perceived benefits resulting from the Best Brands agreement, actual launch timing and availability of American Rebel Beer in additional markets, our ability to effectively execute our business plan, and the Risk Factors contained within our filings with the SEC, including our Annual Report on Form 10-K for the year ended December 31, 2023. Any forward-looking statement made by us herein speaks only as of the date on which it is made. Factors or events that could cause our actual results to differ may emerge from time to time, and it is not possible for us to predict all of them. We undertake no obligation to publicly update any forward-looking statements, whether as a result of new information, future developments or otherwise, except as may be required by law.

    Company Contact:
    info@americanrebel.com

    James “Todd” Porter
    American Rebel Beverages, LLC
    tporter@americanrebelbeer.com

    Investor Relations:
    Brian Prenoveau
    MZ North America
    +1 (561) 489-5315
    AREB@mzgroup.us

    Attachment

    The MIL Network

  • MIL-OSI: Avetta Recognized for 2024 New Product of the Year by Occupational Health & Safety

    Source: GlobeNewswire (MIL-OSI)

    LEHI, Utah and HOUSTON, Oct. 10, 2024 (GLOBE NEWSWIRE) — Avetta®, the leading provider of supply chain risk management (SCRM) software, has been named the winner of New Product of the Year in the AI category by Occupational Health & Safety for its innovative AskAva™ product. The prestigious award honors noteworthy product development achievements aimed at improving workplace safety.

    Launched earlier this year, AskAva is the industry’s first generative AI-powered risk assistant, accelerating contractor compliance and advancing contractor safety and sustainability. More than just a risk management tool, it is proven to reduce workplace incidents, injuries, and fatalities. As part of Avetta’s ongoing commitment to innovation, AskAva adds more capabilities to Avetta’s award-winning Connect platform, which enables global organizations to automate contractor risk management at scale while educating their supply chain vendors about safety best practices.

    “As more and more contractors enter the workforce, it is increasingly important for companies to ensure compliance and safety among all workers,” said Taylor Allis, CPO of Avetta. “AskAva is a one-of-a-kind solution that delivers personalized safety recommendations across the entire supply chain. We are honored to be recognized by Occupational Health & Safety for our efforts to enhance and augment workplace safety.”

    Global organizations can use AskAva to deploy risk assessments to contractors before conducting high-risk work, such as transporting hazardous materials, working around heavy equipment, or working at heights. AskAva’s AI capabilities enable suppliers and clients to quickly identify and add hazards and effective controls to a Job Hazard Analysis (JHA), reducing the time spent researching, reviewing, and documenting potential job hazards. Once on-site, workers enter their prompts, and AskAva generates suggestions on what types of risk practices can be used to avoid an incident.

    Details about the Occupational Health & Safety New Products of the Year Awards and the full list of 2024 winners are available here.

    To learn more about AskAva, visit our website.

    About Avetta

    The Avetta SaaS platform helps clients manage supply chain risk and helps their suppliers become more qualified for jobs. For the hiring clients in our network, we offer the world’s largest supply chain risk management network to manage supplier safety, sustainability, worker competency and performance. We perform contractor prequalification and worker competency management across major industries, all over the globe, including construction, energy, facilities, high tech, manufacturing, mining and telecom.

    Media Contact
    avetta@hoffman.com

    The MIL Network

  • MIL-OSI: New Report Reveals: Customer Loyalty at Stake for Financial Institutions Due to Rise in Identity-Based Attacks

    Source: GlobeNewswire (MIL-OSI)

    NEW YORK, Oct. 10, 2024 (GLOBE NEWSWIRE) — HYPR, the Identity Assurance Company, today released its spotlight report “When Trust is Hacked: Customer Identity Security in Finance.” This report sheds light on the persistent threat of credential misuse and authentication vulnerabilities plaguing the financial industry, drawing a direct correlation between the escalating cyber-threat landscape and the growing apprehension among today’s banking customers. The report’s findings underscore the devastating impact of identity-related cyberattacks on customer loyalty, revealing a staggering 80% of respondents would likely abandon their financial institution following a data breach.

    HYPR’s latest report draws on comprehensive insights from two surveys, encompassing both financial service organizations and their customers, with a total of 548 respondents. This robust data set provides a unique and multifaceted perspective on the current state of identity security in the financial sector – revealing that current technologies are simply failing. Alarmingly, within the past year alone, 86% of finance organizations have been targeted by identity-related cyberattacks, with 84% falling victim to identity fraud. Additionally:

    • Due to insecure authentication methods, financial institutions suffered losses of up to $4.57 million in the last year alone, more than double the $2.19 million reported in 2022.
    • Over three-quarters (77%) faced at least one breach due to credential misuse or authentication weaknesses.
    • Organizations observed a multitude of attacks with phishing attacks leading in prevalence (42%), followed by credential stuffing (29%), identity impersonation (28%), and push notification attacks (27%).

    “The financial sector remains a prime target for cybercriminals, and identity processes remain a major weak point. Institutions must proactively adapt their defenses to outpace evolving threats, or risk eroding customer trust and facing significant financial losses. Inaction is not an option,” said Gehan Dabare, newly appointed HYPR Advisor and former IAM leader at companies including JPMC, Citi, and CVS Health. “Gone are the days of blind trust. Today’s consumers are informed and empowered, demanding transparency, cutting-edge technology, and the peace of mind that comes with knowing their finances are secure.”


    The High Stakes Impact on Customers

    Today’s banking customers are demanding more accountability from their financial institutions, rejecting the unquestioning loyalty of previous generations. The consequences are clear with an overwhelming 80% of customers prepared to switch banks following a data breach. This intolerance for security lapses is even more pronounced among younger customers, with 93% of those under 35 ready to close their accounts. In contrast, more than a quarter of customers aged 45 and older would remain loyal after a breach. These findings emphasize a clear shift in customer priorities across all age groups: security, company values, and technological innovation are now paramount when evaluating banking relationships. Of those surveyed:

    A mere 11% of respondents were aware of breaches affecting their banks, while 63% firmly believed their banks were unscathed, and the remaining quarter were uncertain. This highlights a critical gap in communication from financial institutions during breaches, raising concerns about the effectiveness of their disclosures. In terms of authentication protocols and technology, most respondents (95.5%) are aware of passkeys as an available login technology. Armed with this information, 77% of customers would actively favor a bank offering passkeys over one that doesn’t.

    Yet, despite the growing demand for heightened authentication measures, financial institutions are trailing in their offerings of safer methods. More than a fifth (22%) of respondents still reuse passwords across financial accounts, while close to 90% rely on one-time passwords (SMS, email or voice) and 7% rely solely on a password. This demonstrates the need for modernization in the financial sector’s authentication practices, especially as customers become increasingly aware of and demand stronger security measures.

    “It’s a stark paradox: the financial sector invests heavily in cybersecurity yet remains a prime target. The question isn’t how these attacks happen, but why they persist,” states Bojan Simic, CEO and Co-founder of HYPR. “Our research exposes the dual nature of this challenge: the struggle to implement effective technology amidst rapidly evolving AI-driven threats, and the rising tide of customer expectations demanding both robust security and transparent communication. This is a defining moment for financial institutions to adapt or be left behind.”

    About HYPR
    HYPR, the leader in passwordless identity assurance, delivers the industry’s most comprehensive end-to-end identity security for your workforce and customers. By unifying phishing-resistant passwordless authentication, adaptive risk mitigation, and automated identity verification, HYPR ensures secure and seamless user experiences for everyone.

    Trusted by organizations worldwide, including two of the four largest US banks, leading manufacturers, and critical infrastructure companies, HYPR secures some of the most complex and demanding environments globally.

    Media:
    Fabienne Dawson
    fabienne@hypr.com
    917.374.6860

    A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/215d6253-f76f-4a3d-86cf-139896d58be2

    The MIL Network

  • MIL-OSI: Cloudera Partners with Snowflake to Unleash Hybrid Data Management Integration Powered by Iceberg

    Source: GlobeNewswire (MIL-OSI)

    Unveiled at EVOLVE24, the unified platform will reduce total cost of ownership and provide a single source of truth for all enterprise data

    SANTA CLARA, Calif. and NEW YORK, Oct. 10, 2024 (GLOBE NEWSWIRE) — Cloudera, the only true hybrid platform for data, analytics, and AI, today announced an integration with Snowflake, the AI Data Cloud company, to bring enterprises an open, unified hybrid data lakehouse, powered by Apache Iceberg. Now, enterprises can leverage the combination of Cloudera and Snowflake—two best-of-breed tools for ingestion, processing, and consumption of data—for a single source of truth across all data, analytics, and AI workloads.

    Data is a business’s most powerful asset. It drives informed decision-making, provides a competitive advantage, and reveals opportunities for innovation. A 2022 study revealed that 80% of businesses report higher revenue due to real-time data analytics, and 98% report an increase in positive customer sentiment due to leveraging data. However, to fully harness the power of data, businesses need a single, unified source of truth for storing, managing, and governing all enterprise data, regardless of where it resides.

    Cloudera has extended its Open Data Lakehouse interoperability to Snowflake, allowing joint customers seamless access to Cloudera’s Data Lakehouse via its Apache Iceberg REST Catalog. Customers benefit from an optimized data platform powered by Apache Iceberg, which enables them to ingest, prepare, and process their data with best-in-class tools. Also, Snowflake users can now query data stored on Cloudera’s Ozone, an on-premises AWS S3-compatible object storage solution, directly from Snowflake. Customers now have access to all major form factors through one cohesive collaboration: on-premises, platform-as-a-service (PaaS), and software-as-a-service (SaaS).
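    As a rough illustration of what a REST-catalog integration like this looks like in practice, the DDL below follows the general shape of Snowflake’s generic Iceberg REST catalog integration. The endpoint URI, token, namespace, and table names are placeholders, and the exact options required for a Cloudera Open Data Lakehouse endpoint may differ (an external volume, for example, may also be required); consult both vendors’ documentation before relying on this.

```sql
-- Hypothetical sketch: register an external Iceberg REST catalog in Snowflake.
-- All identifiers, URIs, and credentials below are placeholders.
CREATE CATALOG INTEGRATION cloudera_rest_catalog
  CATALOG_SOURCE = ICEBERG_REST
  TABLE_FORMAT = ICEBERG
  CATALOG_NAMESPACE = 'analytics'
  REST_CONFIG = (
    CATALOG_URI = 'https://lakehouse.example.com/api/catalog'
  )
  REST_AUTHENTICATION = (
    TYPE = BEARER
    BEARER_TOKEN = '<token>'
  )
  ENABLED = TRUE;

-- Surface a lakehouse table as an externally managed Iceberg table.
CREATE ICEBERG TABLE sales_orders
  CATALOG = 'cloudera_rest_catalog'
  CATALOG_TABLE_NAME = 'sales_orders';

-- Query with the Snowflake engine; the data stays in lakehouse storage,
-- with no duplication or transfer, as the release describes.
SELECT COUNT(*) FROM sales_orders;
```

    The point of this pattern is that the catalog, not the query engine, owns table metadata, so any Iceberg-aware engine reading through the same REST catalog sees a single consistent source of truth.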

    In addition to enabling greater interoperability between the two systems, Cloudera customers will experience the ease of Snowflake’s Business Intelligence engine. The Snowflake engine can access data from Cloudera’s Open Data Lakehouse without requiring data duplication or transfer, reducing complexity, streamlining operations, and maintaining data integrity.

    Moreover, this collaboration leads to a reduction in the total cost of ownership of the integrated stack for enterprises. The elimination of data and metadata silos, rationalization of data pipelines, and streamlining of operational efforts are key factors in this cost reduction. These improvements help deliver analytics and AI use cases at scale more efficiently, further enhancing the value proposition for businesses leveraging both Cloudera and Snowflake. This strategic integration not only optimizes analytic workflows but also provides a robust framework for enterprises to drive innovation and gain competitive advantages in their respective markets.

    Additional benefits of this integration include:

    • Managed Iceberg Tables: Iceberg tables enhance data performance and reliability, allowing joint customers to unlock the full potential of their data through better organization, faster queries, and simplified data management, regardless of where the data is stored.
    • Best-of-Breed Engines: Joint customers benefit from top-tier engines to ingest, prepare, and manage their data, enabling seamless management of both artificial intelligence (AI) and business intelligence workloads.
    • Unified Security and Governance: This integration consolidates data security and governance across the entire data lifecycle. Joint customers can apply consistent security measures, track data origin and movement, and manage metadata within a single platform, whether on-premises or in the cloud.

    “By extending our open data lakehouse capabilities through Apache Iceberg to Snowflake, we’re enabling our customers to not only optimize their data workflows but also unlock new opportunities for innovation, efficiency, and growth,” said Abhas Ricky, Chief Strategy Officer of Cloudera. “This will help customers simplify their data architecture, minimize data pipelines, and reduce total cost of ownership of their data estate while reducing security risks. Together, Snowflake and Cloudera are bringing about the next era of data-driven decision-making for every modern organization.”

    “Apache Iceberg is a leading choice for customers who want open standards for data, and Cloudera has been an integral part of the Iceberg project,” said Tarik Dwiek, Head of Technology Alliances at Snowflake. “Our partnership expands what’s possible for customers who choose to standardize on Iceberg tables. We are excited to break down silos and deliver a unified hybrid data cloud experience with multi-function analytics to all of our customers.”

    “Through this collaboration, customers gain access to a unified, robust data management platform that provides a single source of truth for all of their data, whether in the cloud or on-premises,” said Sanjeev Mohan, analyst at SanjMo. “This enables them to streamline and secure their data operations while efficiently analyzing and extracting insights across the entire data lifecycle – from ingestion to AI and analytics. It’s a strategic move from two industry giants to partner in a way that will deliver immediate value to businesses.”

    In addition, reaffirming its commitment to advancing Iceberg adoption, Cloudera announced the technical preview of Cloudera Lakehouse Optimizer. This new service autonomously optimizes Iceberg tables, further reducing costs while significantly enhancing lakehouse performance. To learn more about this technical preview, click here.

    About Cloudera
    Cloudera is the only true hybrid platform for data, analytics, and AI. With 100x more data under management than other cloud-only vendors, Cloudera empowers global enterprises to transform data of all types, on any public or private cloud, into valuable, trusted insights. Our open data lakehouse delivers scalable and secure data management with portable cloud-native analytics, enabling customers to bring GenAI models to their data while maintaining privacy and ensuring responsible, reliable AI deployments. The world’s largest brands in financial services, insurance, media, manufacturing, and government rely on Cloudera to use their data to solve what seemed impossible—today and in the future.

    To learn more, visit Cloudera.com and follow us on LinkedIn and X. Cloudera and associated marks are trademarks or registered trademarks of Cloudera, Inc. All other company and product names may be trademarks of their respective owners.

    Contact
    Jess Hohn-Cabana
    cloudera@v2comms.com

    The MIL Network

  • MIL-OSI: Nasdaq Rises to 5th in RiskTech100 Global Ranking Following Launch of Financial Technology Division

    Source: GlobeNewswire (MIL-OSI)

    Announcement comes ahead of the first anniversary of Nasdaq’s acquisition of Adenza

    Nasdaq also wins two awards for its financial crime management and regulatory reporting technology

    NEW YORK, Oct. 10, 2024 (GLOBE NEWSWIRE) — Nasdaq (Nasdaq: NDAQ) today announced it has jumped to 5th place in Chartis’ annual RiskTech100® global ranking and has won two awards for its financial crime management and regulatory reporting technology. The news comes less than a year after Nasdaq’s acquisition of Adenza and the establishment of its Financial Technology division. Today, as a scaled platform partner, Nasdaq draws on deep industry experience, technology leadership and cloud managed services to help 3,500+ banks, brokers, regulators, central banks, financial infrastructure operators, and buy-side firms solve their most complex operational challenges across risk, compliance, and trade management.

    Chartis’ annual RiskTech100® awards and ranking is widely regarded as the most comprehensive independent study of the world’s major players in risk and compliance technology. In 2023, Nasdaq ranked #18 and Adenza placed #10, with this year’s position reflecting the combined strength of their technology offerings.

    “This is a remarkable achievement less than one year into the integration,” said Tal Cohen, President of Nasdaq. “The financial services industry faces a series of challenges through increased regulatory scrutiny, ongoing market reforms, and ever more sophisticated financial crime, alongside accelerated technology innovation. Our customers consistently tell us that they value the opportunity to partner with brands that they trust, that are highly regulated themselves and can offer insight and expertise beyond the platforms they provide. We welcome the opportunity to support our clients at such a pivotal moment for the industry, and I’m proud to see our achievements recognized by Chartis.”

    Sid Dash, Chief Researcher at Chartis Research, added: “Nasdaq’s acquisitions, individually and collectively, provide comprehensive coverage of the transaction lifecycle, and are appropriately supported with a strong technology and service framework. Indeed, the breadth of its capabilities has moved it into the top five in the risk technology space.”

    A comprehensive portfolio of mission-critical technology

    Nasdaq’s Capital Markets Technology is deeply embedded in client workflows and serves as the backbone of the capital market operations it underpins; as one of the world’s largest market infrastructure technology providers, it supports more than 130 financial market operators globally, including over half of the world’s largest exchanges. In addition, Nasdaq Calypso is a truly global, multi-asset-class, front-to-back trade management platform – spanning trading, clearing, risk management and post-trade processing – with particular strength in OTC products.

    Nasdaq’s Regulatory Technology solutions play a critical role in protecting trust and integrity across the global financial system, helping clients efficiently and effectively comply with an extensive range of regulatory requirements in an increasingly complex and rapidly evolving environment.

    Nasdaq AxiomSL is a comprehensive regulatory reporting and compliance platform, helping clients comply with requirements across 55 countries and 110 regulators. Nasdaq’s market and trade surveillance technology helps firms detect and prevent market abuse across an extensive network of regulators, exchanges, digital assets marketplaces and market participants. Its cloud-based anti-financial crime technology, Nasdaq Verafin, integrates, resolves, and enriches data from hundreds of data sources and thousands of institutions representing more than $9 trillion in collective assets, to help firms more effectively detect fraud and combat criminal activity.

    With Nasdaq’s technology used by 97% of global systemically important banks, half of the world’s top 25 stock exchanges, and 35 central banks and regulatory authorities, it touches a significant portion of the global financial system daily.

    Nasdaq’s ranking also included an assessment of its Nasdaq Boardvantage® board management software, Nasdaq Metrio™ sustainability reporting platform, and Sustainable Lens™ ESG AI Research and Benchmarking solution. More details on the products and services can be found here.

    Nasdaq wins two awards for financial crime and regulatory reporting technology

    Alongside the RiskTech100 ranking, Chartis announced Nasdaq has won two industry awards for Managed Services: Financial Crime and Regulatory Reporting: Markets and Securities.

    The award for Managed Services: Financial Crime recognizes Nasdaq Verafin’s leadership in financial crime management, emphasizing its comprehensive suite of anti-money laundering and fraud detection solutions for a large client base. Its unified platform combines financial crime solutions into one service, with scalable architecture serving a broad range of banks.

    The Regulatory Reporting: Markets and Securities award highlights Nasdaq’s leadership in regulatory reporting through AxiomSL, noting its extensive multi-jurisdictional, multi-market reporting, and expertise in adapting to complex regulatory requirements.

    About Nasdaq

    Nasdaq (Nasdaq: NDAQ) is a leading global technology company serving corporate clients, investment managers, banks, brokers, and exchange operators as they navigate and interact with the global capital markets and the broader financial system. We aspire to deliver world-leading platforms that improve the liquidity, transparency, and integrity of the global economy. Our diverse offering of data, analytics, software, exchange capabilities, and client-centric services enables clients to optimize and execute their business vision with confidence. To learn more about the company, technology solutions, and career opportunities, visit us on LinkedIn, on X @Nasdaq, or at http://www.nasdaq.com.

    Nasdaq Media Contact: 
    Andrew Hughes 
    +44 (0)7443 100896 
    Andrew.Hughes@nasdaq.com  

    -NDAQG-

    Cautionary Note Regarding Forward-Looking Statements:  

    Information set forth in this press release contains forward-looking statements that involve a number of risks and uncertainties. Nasdaq cautions readers that any forward-looking information is not a guarantee of future performance and that actual results could differ materially from those contained in the forward-looking information. Forward-looking statements can be identified by words such as “can” and other words and terms of similar meaning. Such forward-looking statements include, but are not limited to, statements related to the benefits of Nasdaq’s Financial Technology solutions. Forward-looking statements involve a number of risks, uncertainties or other factors beyond Nasdaq’s control. These risks and uncertainties are detailed in Nasdaq’s filings with the U.S. Securities and Exchange Commission, including its annual reports on Form 10-K and quarterly reports on Form 10-Q which are available on Nasdaq’s investor relations website at http://ir.nasdaq.com and the SEC’s website at http://www.sec.gov. Nasdaq undertakes no obligation to publicly update any forward-looking statement, whether as a result of new information, future events or otherwise.  

    The MIL Network

  • MIL-OSI: Leading Analyst Firm Ranks Tenable #1 for Sixth Consecutive Year in Market Share for Device Vulnerability Management

    Source: GlobeNewswire (MIL-OSI)

    COLUMBIA, Md., Oct. 10, 2024 (GLOBE NEWSWIRE) — Tenable®, the exposure management company, today announced that it has been ranked first for 2023 worldwide market share for device vulnerability management in the IDC Worldwide Device Vulnerability Management Market Shares (doc #US51417424, July 2024) report. This is the sixth consecutive year Tenable has been ranked first for market share.

    According to the IDC market share report, Tenable is ranked first in global 2023 market share and revenue. Tenable credits its success to its strategic approach to risk management, which includes a suite of industry-leading exposure management solutions that expose and close security gaps, safeguarding business value, reputation and trust. The Tenable One Exposure Management Platform, the world’s only AI-powered exposure management platform, radically unifies security visibility, insight and action across the modern attack surface – IT, cloud, OT and IoT, web apps and identity systems.

    According to the IDC market share report, “The top 3 device vulnerability management vendors remained the same in 2023 as previous years, with Tenable once again being the top vendor.”

    The report highlighted Tenable’s use of generative AI, noting, “ExposureAI, available as part of the Tenable One platform, provides GenAI-based capabilities that include natural language search queries, attack path and asset exposure summaries, mitigation guidance suggestions, and a bot assistant to ask specific questions about attack path results.”

    Tenable’s latest innovations in the vulnerability management market – Vulnerability Intelligence and Exposure Response – were also highlighted in the report, stating, “Vulnerability Intelligence provides dynamic vulnerability information collected from multiple data sources and vetted by Tenable researchers, while Exposure Response enables security teams to create campaigns based on risk posture trends so remediation progress can be monitored internally.”

    The report also spotlighted the Tenable Assure Partner Program and MDR partnerships, noting, “Tenable has made more of a strategic effort to recruit managed security service providers (SPs) and improve the onboarding experience for them, as well as their customers. Managed detection and response (MDR) providers have been adding proactive exposure management because it helps shrink the customer attack surface, helping them provide better outcomes. Sophos and Coalfire are recently announced partners adding managed exposure management services to their MDR and pen testing services, respectively.”

    “At Tenable, we build products for a cloud-first, platform centric world, meeting customers’ evolving risk management needs,” said Shai Morag, chief product officer, Tenable. “We leverage cutting edge technology, innovating across our portfolio to help customers know, expose and close priority security gaps that put businesses at risk.”

    “The device vulnerability management market is characterized by a focus on broader exposure management, with a number of acquisitions to round out exposure management portfolios,” said Michelle Abraham, senior research director, Security and Trust at IDC. “Vendors are advised to enhance their offerings with additional security signals and automated remediation workflows to stay competitive in this evolving landscape.”

    To read an excerpt of the IDC market share report, visit https://www.tenable.com/analyst-research/idc-worldwide-device-vulnerability-management-market-share-report-2023

    About Tenable
    Tenable® is the exposure management company, exposing and closing the cybersecurity gaps that erode business value, reputation and trust. The company’s AI-powered exposure management platform radically unifies security visibility, insight and action across the attack surface, equipping modern organizations to protect against attacks from IT infrastructure to cloud environments to critical infrastructure and everywhere in between. By protecting enterprises from security exposure, Tenable reduces business risk for more than 44,000 customers around the globe. Learn more at tenable.com.

    Media Contact:
    Tenable
    tenablepr@tenable.com

    The MIL Network

  • MIL-OSI: DTE Energy schedules third quarter 2024 earnings release, conference call

    Source: GlobeNewswire (MIL-OSI)

    Detroit, Oct. 10, 2024 (GLOBE NEWSWIRE) — DTE Energy (NYSE:DTE) will announce its third quarter 2024 earnings before the market opens Thursday, October 24, 2024.

    The company will conduct a conference call to discuss earnings results at 9:00 a.m. ET the same day.

    Investors, the news media and the public may listen to a live internet broadcast of the call at dteenergy.com/investors. The telephone dial-in number in the U.S. and Canada toll free is: (888) 510-2008. The U.S. and international toll telephone dial-in number is: (646) 960-0306 and the Canada dial-in toll is: (289) 514-5035. The passcode is 4987588. The webcast will be archived on the DTE Energy website at dteenergy.com/investors.

    About DTE Energy 

    DTE Energy (NYSE:DTE) is a Detroit-based diversified energy company involved in the development and management of energy-related businesses and services nationwide. Its operating units include an electric company serving 2.3 million customers in Southeast Michigan and a natural gas company serving 1.3 million customers across Michigan. The DTE portfolio also includes energy businesses focused on custom energy solutions, renewable energy generation, and energy marketing and trading. DTE has continued to accelerate its carbon reduction goals to meet aggressive targets and is committed to serving with its energy through volunteerism, education and employment initiatives, philanthropy, emission reductions and economic progress. Information about DTE is available at dteenergy.com, empoweringmichigan.com, x.com/DTE_Energy and facebook.com/dteenergy

    For further information, analysts may call:
    Matt Krupinski, DTE Energy: 313.235.6649
    John Dermody, DTE Energy: 313.235.8750

    The MIL Network

  • MIL-OSI: Sky Quarry to Begin Trading Publicly on NASDAQ

    Source: GlobeNewswire (MIL-OSI)

    WOODS CROSS, Utah, Oct. 10, 2024 (GLOBE NEWSWIRE) — Sky Quarry Inc. (NASDAQ: SKYQ) (“Sky Quarry,” “SQI,” or the “Company”), an oil production, refining, and development-stage environmental remediation company formed to deploy technologies to facilitate the recycling of waste asphalt shingles and remediation of oil-saturated sands and soils, announced that its common stock will begin trading on the NASDAQ Capital Market today, October 10, 2024, at approximately 11:00 a.m. ET under the ticker symbol “SKYQ”.

    On October 9, 2024, Sky Quarry announced it closed a Public Offering of $6,708,030 through the sale of 1,118,005 shares of its Common Stock priced at $6.00 per share.

    Digital Offering, LLC, acted as the lead managing selling agent. Clyde Snow & Sessions, PC acted as counsel to Sky Quarry and Bevilacqua PLLC acted as counsel for the managing selling agent.

    For more information and additional investor materials, please visit the Company’s investor relations website here.

    About Sky Quarry Inc.

    Sky Quarry Inc. and its subsidiaries are, collectively, an oil production, refining, and a development-stage environmental remediation company formed to deploy technologies to facilitate the recycling of waste asphalt shingles and remediation of oil-saturated sands and soils. Our waste-to-energy mission is to repurpose and upcycle millions of tons of asphalt shingle waste, diverting them from landfills. By doing so, we can contribute to improved waste management, promote resource efficiency, conserve natural resources, and reduce environmental impact. For more information, please visit http://www.skyquarry.com.

    Forward-Looking Statements

    This press release may include “forward-looking statements.” All statements pertaining to our future financial and/or operating results, future events, or future developments may constitute forward-looking statements. The statements may be identified by words such as “expect,” “look forward to,” “anticipate,” “intend,” “plan,” “believe,” “seek,” “estimate,” “will,” “project,” or words of similar meaning. Such statements are based on the current expectations and certain assumptions of our management, many of which are beyond our control. These are subject to a number of risks, uncertainties, and factors, including but not limited to those described in our disclosures. Should one or more of these risks or uncertainties materialize, or should underlying expectations not occur or assumptions prove incorrect, actual results, performance, or our achievements may (negatively or positively) vary materially from those described explicitly or implicitly in the relevant forward-looking statement. We neither intend, nor assume any obligation, to update or revise these forward-looking statements in light of developments which differ from those anticipated. You are urged to carefully review and consider any cautionary statements and other disclosures, including the statements made under the heading “Risk Factors” and elsewhere in the offering statement filed with the SEC. Forward-looking statements speak only as of the date of the document in which they are contained.

    Investor Relations
    Chris Tyson
    Executive Vice President
    MZ Group – MZ North America
    949-491-8235
    SKYQ@mzgroup.us
    http://www.mzgroup.us

    Company Website

    https://investor.skyquarry.com/

    The MIL Network

  • MIL-OSI New Zealand: Whangārei Police deal blow to core group of offenders

    Source: New Zealand Police (District News)

    Police have made further arrests over a recent spate of offending across the Kaipara and Whangārei regions.

    Four recent arrests will see offenders held to account over the majority of recent aggravated robberies and burglaries at various businesses.

    Combined efforts between frontline staff and the Tactical Crime Unit have resulted in dozens of charges being laid, Area Commander Inspector Maria Nordstrom says.

    “Late on Saturday night, frontline staff stopped a vehicle at a Te Kamo petrol station forecourt which was sought in connection with an earlier road rage incident in Auckland.

    “The occupants were arrested without further incident and a firearm was located following a search of the vehicle.”

    A 17-year-old in the vehicle was sought in connection with an aggravated robbery at an Otaika dairy in early July.

    He will face the Whangārei Youth Court for that offence, as well as charges for unlawful possession of a firearm and ammunition.

    “The Tactical Crime Unit has also charged him over numerous burglaries and theft of motor vehicles across the region between late June and July,” Inspector Nordstrom says.

    This follows an arrest made by local Dargaville staff days earlier of a prolific offender.

    Inspector Nordstrom says the 44-year-old man is allegedly responsible for some 20 offences across the Dargaville and Whangārei areas over the past month.

    “Our staff located a stolen vehicle travelling near Tangowahine, and later arrested the man.

    “He’s since had an initial appearance in the Whangārei District Court on burglary charges where he allegedly targeted clothing, food and jewellery.”

    Police successfully opposed the man’s bail, and he has been held in custody until his next appearance on 21 October.

    “Dargaville staff have been working incredibly hard in investigating these offences, and it was a great result for the community that he is remanded in custody.”

    Late last month, Police also caught up with a 15-year-old male who had offended alongside another youth arrested in late August.

    Police colleagues in Hutt Valley spoke with the male, and he has since been referred to Youth Aid over a series of aggravated robberies and burglaries.

    “I acknowledge the dedication of our staff working right across this region, who have diligently been piecing together the offences leading to arrests,” Inspector Nordstrom says.

    ENDS.

    Jarred Williamson/NZ Police

    MIL OSI New Zealand News

  • MIL-OSI New Zealand: Govt broadly accepts Royal Commission findings

    Source: New Zealand Government

    The Government has broadly accepted the findings of the Royal Commission of Inquiry into Abuse in Care whilst continuing to consider and respond to its recommendations.

    “It is clear the Crown utterly failed thousands of brave New Zealanders. As a society and as the State we should have done better. This Government is determined to do better,” Lead Coordination Minister Erica Stanford says.

    “We broadly accept the findings of the report. Further work is required to respond to those findings that are legal in nature. In the meantime, we are focused on delivering a considered and comprehensive response to the recommendations.”

    The Government is currently working through the 138 recommendations and the 95 recommendations from the 2021 interim report on redress. 

    “Since the tabling of Whanaketia on 24 July, we acknowledged some children and young people experienced torture at the Lake Alice Unit and set up urgent financial assistance to those survivors who are terminally ill.

    “A Crown Response Office has also been established to drive the Government’s ongoing response and the Prime Minister will publicly apologise to abuse in care survivors in Parliament on 12 November,” Ms Stanford says.

    “The abuse perpetrated on survivors for decades is a debt that can never be repaid. I acknowledge the Royal Commission process has spanned six years and survivors would like to see action. The recommendations are complex and it’s important they are considered carefully and respectfully.”

    MIL OSI New Zealand News

  • MIL-OSI: YY Group Holdings Limited Successfully Regains NASDAQ Compliance

    Source: GlobeNewswire (MIL-OSI)

    SINGAPORE, Oct. 10, 2024 (GLOBE NEWSWIRE) — YY Group Holding Limited (NASDAQ: YYGH) (“YY Group”, “YYGH”, or the “Company”), is pleased to announce that the company has regained compliance with NASDAQ’s Minimum Bid Price Rule, maintaining a consistent stock price above $1.00 for more than 12 consecutive business days.

    This achievement marks a key milestone in YYGH’s continued growth and recovery, after the stock reached a low of $0.71 two months ago. Since then it has risen by over 70%, reaching a peak of $1.295 and averaging $1.20 over the past two weeks, a significant improvement over the past 60 days. This growth highlights the market’s renewed confidence in the Company’s vision and the strength of its business model.

    Investor Support Key to Recovery

    The Company attributes this success to the unwavering support of its investors. In a market characterized by volatility, YYGH’s ability to stabilize and grow its stock price would not have been possible without the trust and confidence of its shareholders. The Company’s leadership recognizes the importance of its investor relationships and is committed to delivering long-term value through strategic initiatives and operational excellence.

    Chief Executive Officer and Executive Director, Mike Fu, expressed his gratitude, stating: “We are incredibly grateful for the support of our investors during this crucial time. Their confidence in YY Group’s future has been a vital component of our ability to regain compliance with NASDAQ’s standards. As we look ahead, our commitment to innovation, excellence, and shareholder value remains stronger than ever.”

    Looking Ahead

    As part of its forward strategy, YYGH is dedicated to driving sustainable growth by leveraging technological advancements and exploring opportunities in new markets. The recent expansion into the UAE has produced positive outcomes, with contracts signed with 5-star hotels such as Sofitel Al Hamra and DoubleTree by Hilton. The Company has also expanded into the European market, with the United Kingdom as its first point of entry.

    About YY Group Holding Limited:
    YY Group Holding Limited is a Singapore-based data and technology-driven company that specializes in creating enterprise intelligent labor matching services and smart cleaning solutions. Rooted in innovation and a commitment to user-centric experiences, YY Circle leverages app-based technology to optimize the labor sourcing market and the Internet of Things to revolutionize the cleaning industry.

    For more information on the Company, please log on to https://yygroupholding.com/.

    Investor Contact:
    Phua Zhi Yong, Chief Financial Officer
    YY Group
    Enquiries@yygroupholding.com

    The MIL Network