Tech

Cybersecurity Alert: CrowdStrike Faces Data Leak Threat from Hackers


In a startling revelation, CrowdStrike, one of the leading cybersecurity firms in the United States, has reported that hackers have threatened to leak sensitive information about adversary groups it monitors. The situation escalated as the company confirmed that some of its private data had already been released online, raising significant concerns about cybersecurity protocols and the implications of such breaches.

The Nature of the Leak

On Wednesday evening, CrowdStrike disclosed that an internal database detailing the hacker groups it tracks had been compromised. This leak includes a wealth of information, echoing some data that the company has publicly shared in the past. The leaked details list 244 notable hacker groups, specifying their activity status (active, inactive, or retired), country of origin, targeted industries, and whether they are affiliated with government entities, hacktivist movements, or operate as independent cybercriminals.


However, the hacker, operating under the alias “USDoD,” claims to possess even more sensitive data, including a list of “Indicators of Compromise.” These indicators are crucial digital footprints cybersecurity experts rely on to trace the activities of hacker groups. While the cybersecurity community often encounters threats of data leaks, it is uncommon for a major firm like CrowdStrike to publicly acknowledge such a claim without refuting it.
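To give a sense of how indicators of compromise are typically used, here is a minimal sketch in Python. It assumes a hypothetical IOC feed of file hashes and domains (the values below are placeholders, not real threat intelligence and not CrowdStrike data) and simply checks artifacts against that list, which is the basic matching step most defensive tooling performs.

```python
# Illustrative only: matching local artifacts against a hypothetical
# list of indicators of compromise (IOCs). The hash and domain values
# are placeholders, not real threat intelligence.
import hashlib
from pathlib import Path

# Hypothetical IOC feed: SHA-256 hashes of known-bad files and
# domains observed in attacker infrastructure.
IOC_FILE_HASHES = {
    "0" * 64,  # placeholder standing in for a real SHA-256 indicator
}
IOC_DOMAINS = {"malicious-example.test"}  # placeholder domain


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def file_matches_ioc(path: Path) -> bool:
    """True if the file's hash appears in the indicator list."""
    return sha256_of(path) in IOC_FILE_HASHES


def domain_matches_ioc(domain: str) -> bool:
    """True if a contacted domain appears in the indicator list."""
    return domain.lower().rstrip(".") in IOC_DOMAINS


if __name__ == "__main__":
    suspect = Path("suspicious_download.bin")  # hypothetical file name
    if suspect.exists() and file_matches_ioc(suspect):
        print(f"IOC match: {suspect} has a known-bad hash")
    if domain_matches_ioc("malicious-example.test"):
        print("IOC match: outbound connection to a flagged domain")
```

Because defenders rely on matching like this, a leak of a firm's indicator lists could tell adversaries exactly which of their tools and infrastructure have been identified.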

The Implications of the Breach

The hacker group USDoD posted the leaked information on BreachForums, a well-known English-language hacker forum, which has raised alarms among cybersecurity experts. Although the leaked database was current as of June, CrowdStrike indicated it had been updated in July, suggesting that the breach occurred recently.

CrowdStrike’s acknowledgment of the leak is particularly concerning given the company’s recent history. Just days before this revelation, the firm faced criticism for a massive computer system crash attributed to a routine software update that inadvertently included a coding error. This incident affected an estimated 8.5 million Windows computers, disrupting operations across various sectors, including airlines, hospitals, and even the ticketing system for the upcoming Paris Olympics.

While CrowdStrike has asserted that the data breach is separate from the software glitch, the proximity of these events raises questions about the company’s cybersecurity measures and overall resilience against threats.

The Bigger Picture

The emergence of this data leak serves as a potent reminder of the ongoing challenges facing cybersecurity firms. As more sensitive information comes under threat, the implications extend beyond just the companies affected; they ripple through industries and economies.

Cybercriminals often exploit current events for personal gain and recognition, making it imperative for organizations to remain vigilant and proactive in their cybersecurity strategies. This incident underscores the importance of robust data protection measures and the need for continuous monitoring and rapid response capabilities.


Moving Forward

As the situation develops, CrowdStrike continues to monitor the threat posed by USDoD and the potential fallout from the leaked information. For organizations and individuals alike, this incident serves as a wake-up call to prioritize cybersecurity and to remain aware of the evolving tactics employed by cybercriminals.

The implications of this data leak go beyond CrowdStrike; they remind us of the ever-present vulnerabilities in our digital landscape. As we navigate an increasingly interconnected world, staying informed and adopting best practices in cybersecurity is more crucial than ever.

Read more about this story on the NBC News website: https://www.nbcnews.com/tech/security/crowdstrike-says-hackers-are-threatening-leak-sensitive-information-ad-rcna163675

Related: https://stmdailynews.com/uspto-update-on-crowdstrike-it-outage-a-commitment-to-recovery/

STM Daily News is a vibrant news blog dedicated to sharing the brighter side of human experiences. Emphasizing positive, uplifting stories, the site focuses on delivering inspiring, informative, and well-researched content. With a commitment to accurate, fair, and responsible journalism, STM Daily News aims to foster a community of readers passionate about positive change and engaged in meaningful conversations. Join the movement and explore stories celebrating the positive impacts shaping our world.

https://stmdailynews.com/category/stories-this-moment

Authors



Hal Machina is a passionate writer, blogger, and self-proclaimed journalist who explores the intersection of science, tech, and futurism. Join him on a journey into innovative ideas and groundbreaking discoveries!

Lifestyle

Engineering students explore how to ethically design and locate nuclear facilities in this college course

Published

on

While nuclear power can reap enormous benefits, it also comes with some risks. Michel Gounot/GODONG/Stone via Getty Images

Aditi Verma, University of Michigan and Katie Snyder, University of Michigan

Uncommon Courses is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.

Title of course:

Socially Engaged Design of Nuclear Energy Technologies

What prompted the idea for the course?

The two of us had some experience with participatory design coming into this course, and we had a shared interest in bringing virtual reality into a first-year design class at the University of Michigan. It seemed like a good fit to help students learn about nuclear technologies, given that hands-on experience can be difficult to provide in that context. We both wanted to teach students about the social and environmental implications of engineering work, too. Aditi is a nuclear engineer and had been using participatory design in her research, and Katie had been teaching ethics and design to engineering students for many years.

What does the course explore?

Broadly, the course explores engineering design. We introduce our students to the principles of nuclear engineering and energy systems design, and we go through ethical concerns. They also learn communication strategies – like writing for different audiences.

Students learn to design the exterior features of nuclear energy facilities in collaboration with local communities. The course focuses on a different nuclear energy technology each year. In the first year, the focus was on fusion energy systems. In fall 2024, we looked at locating nuclear microreactors near local communities. The main project was to collaboratively decide where a microreactor might be sited, what it might look like, and what outcomes the community would like to see versus which would cause concern. Students also think about designing nuclear systems with both future generations and a shared common good in mind.

The class explores engineering as a sociotechnical practice – meaning that technologies are not neutral. They shape and affect social life, for better and for worse. To us, a sociotechnical engineer is someone who adheres to scientific and engineering fundamentals, communicates ethically and designs in collaboration with the people who are likely to be affected by their work. In class, we help our students reflect on these challenges and responsibilities.

Why is this course relevant now?

Nuclear energy system design is advancing quickly, allowing engineers to rethink how they approach design. Fusion energy systems and fission microreactors are two areas of rapidly evolving innovation. Microreactors are smaller than traditional nuclear energy systems, so planners can place them closer to communities. These smaller reactors will likely be safer to run and operate, and may be a good fit for rural communities looking to transition to carbon-neutral energy systems. But for the needs, concerns and knowledge of local people to shape the design process, local communities need to be involved in these reactor siting and design conversations.
Students in the course explore nuclear facilities in virtual reality. Thomas Barwick/DigitalVision via Getty Images

What materials does the course feature?

We use virtual reality models of both fission and fusion reactors, along with models of energy system facilities. AI image generators are helpful for rapid prototyping – we have used these in class with students and in workshops. This year, we are also inviting students to do some hands-on prototyping with scrap materials for a project on nuclear energy systems.

What will the course prepare students to do?

Students leave the course understanding that community engagement is an essential – not optional – component of good design. We equip students to approach technology use and development with users’ needs and concerns in mind. Specifically, they learn how to engage with and observe communities using ethical, respectful methods that align with the university’s engineering research standards.

What’s a critical lesson from the course?

As instructors, we have an opportunity – and probably also an obligation – to learn from students as much as we are teaching them course content. Gen Z students have grown up with environmental and social concerns as centerpieces of their media diets, and we’ve noticed that they tend to be more strongly invested in these topics than previous generations of engineering students.

Aditi Verma, Assistant Professor of Nuclear Engineering and Radiological Sciences, University of Michigan and Katie Snyder, Lecturer III in Technical Communication, College of Engineering, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.


STM Blog

AI-generated images can exploit how your mind works − here’s why they fool you and how to spot them

Arryn Robbins discusses the challenges of recognizing AI-generated images due to human cognitive limitations and inattentional blindness, emphasizing the importance of critical thinking in a visually fast-paced online environment.


Arryn Robbins, University of Richmond

I’m more of a scroller than a poster on social media. Like many people, I wind down at the end of the day with a scroll binge, taking in videos of Italian grandmothers making pasta or baby pygmy hippos frolicking.

For a while, my feed was filled with immaculately designed tiny homes, fueling my desire for a minimalist paradise. Then, I started seeing AI-generated images; many contained obvious errors, such as staircases to nowhere or sinks within sinks. Yet, commenters rarely pointed them out, instead admiring the aesthetic.

These images were clearly AI-generated and didn’t depict reality. Did people just not notice? Not care?

As a cognitive psychologist, I’d guess “yes” and “yes.” My expertise is in how people process and use visual information. I primarily investigate how people look for objects and information visually, from the mundane searches of daily life, such as trying to find a dropped earring, to more critical searches, like those conducted by radiologists or search-and-rescue teams.

With my understanding of how people process images and notice − or don’t notice − detail, it’s not surprising to me that people aren’t tuning in to the fact that many images are AI-generated.

We’ve been here before

The struggle to detect AI-generated images mirrors past detection challenges such as spotting photoshopped images or computer-generated images in movies.


But there’s a key difference: Photo editing and CGI require intentional design by artists, while AI images are generated by algorithms trained on datasets, often without human oversight. The lack of oversight can lead to imperfections or inconsistencies that can feel unnatural, such as the unrealistic physics or lack of consistency between frames that characterize what’s sometimes called “AI slop.”

Despite these differences, studies show people struggle to distinguish real images from synthetic ones, regardless of origin. Even when explicitly asked to identify images as real, synthetic or AI-generated, accuracy hovers near the level of chance, meaning people did only a little better than if they’d just guessed.

In everyday interactions, where you aren’t actively scrutinizing images, your ability to detect synthetic content might even be weaker.

Attention shapes what you see, what you miss

Spotting errors in AI images requires noticing small details, but the human visual system isn’t wired for that when you’re casually scrolling. Instead, while online, people take in the gist of what they’re viewing and can overlook subtle inconsistencies.

Visual attention operates like a zoom lens: You scan broadly to get an overview of your environment or phone screen, but fine details require focused effort. Human perceptual systems evolved to quickly assess environments for any threats to survival, with sensitivity to sudden changes − such as a quick-moving predator − sacrificing precision for speed of detection.

This speed-accuracy trade-off allows for rapid, efficient processing, which helped early humans survive in natural settings. But it’s a mismatch with modern tasks such as scrolling through devices, where small mistakes or unusual details in AI-generated images can easily go unnoticed.

People also miss things they aren’t actively paying attention to or looking for. Psychologists call this inattentional blindness: Focusing on one task causes you to overlook other details, even obvious ones. In the famous invisible gorilla study, participants asked to count basketball passes in a video failed to notice someone in a gorilla suit walking through the middle of the scene.

If you’re counting how many passes the people in white make, do you even notice someone walk through in a gorilla suit?

Similarly, when your focus is on the broader content of an AI image, such as a cozy tiny home, you’re less likely to notice subtle distortions. In a way, the sixth finger in an AI image is today’s invisible gorilla − hiding in plain sight because you’re not looking for it.

Efficiency over accuracy in thinking

Our cognitive limitations go beyond visual perception. Human thinking uses two types of processing: fast, intuitive thinking based on mental shortcuts, and slower, analytical thinking that requires effort. When scrolling, our fast system likely dominates, leading us to accept images at face value.

Adding to this issue is the tendency to seek information that confirms your beliefs or reject information that goes against them. This means AI-generated images are more likely to slip by you when they align with your expectations or worldviews. If an AI-generated image of a basketball player making an impossible shot jibes with a fan’s excitement, they might accept it, even if something feels exaggerated.

While not a big deal for tiny home aesthetics, these issues become concerning when AI-generated images may be used to influence public opinion. For example, research shows that people tend to assume images are relevant to accompanying text. Even when the images provide no actual evidence, they make people more likely to accept the text’s claims as true.

Misleading real or generated images can make false claims seem more believable and even cause people to misremember real events. AI-generated images have the power to shape opinions and spread misinformation in ways that are difficult to counter.

Beating the machine

While AI gets better at detecting AI, humans need tools to do the same. Here’s how:

  1. Trust your gut. If something feels off, it probably is. Your brain expertly recognizes objects and faces, even under varying conditions. Perhaps you’ve experienced what psychologists call the uncanny valley and felt unease with certain humanoid faces. This experience shows people can detect anomalies, even when they can’t fully explain what’s wrong.
  2. Scan for clues. AI struggles with certain elements: hands, text, reflections, lighting inconsistencies and unnatural textures. If an image seems suspicious, take a closer look.
  3. Think critically. Sometimes, AI generates photorealistic images with impossible scenarios. If you see a political figure casually surprising baristas or a celebrity eating concrete, ask yourself: Does this make sense? If not, it’s probably fake.
  4. Check the source. Is the poster a real person? Reverse image search can help trace a picture’s origin. If the metadata is missing, it might be generated by AI; a simple metadata check is sketched after this list.
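As a rough illustration of that last tip, the Python sketch below reads whatever EXIF metadata an image carries, using the Pillow library. The file name is a placeholder, and missing metadata is only a weak signal: many platforms strip EXIF data from ordinary photos too, so treat this as one clue among several rather than a verdict.

```python
# A rough sketch of the metadata check described above, using Pillow
# (pip install pillow). "suspicious_post.jpg" is a hypothetical file.
# Absence of camera EXIF data is only a weak hint: many platforms
# strip metadata from real photos, so combine this with other checks.
from PIL import Image
from PIL.ExifTags import TAGS


def summarize_exif(path: str) -> dict:
    """Return a readable dict of whatever EXIF tags the image carries."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


if __name__ == "__main__":
    info = summarize_exif("suspicious_post.jpg")  # hypothetical file name
    if not info:
        print("No EXIF metadata found - possibly AI-generated, or stripped by a platform.")
    else:
        # Camera make/model and capture time are the most telling fields.
        for key in ("Make", "Model", "DateTime", "Software"):
            if key in info:
                print(f"{key}: {info[key]}")
```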

AI-generated images are becoming harder to spot. During scrolling, the brain processes visuals quickly, not critically, making it easy to miss details that reveal a fake. As technology advances, slow down, look closer and think critically.

Arryn Robbins, Assistant Professor of Psychology, University of Richmond

This article is republished from The Conversation under a Creative Commons license. Read the original article.


A beautiful kitchen to scroll past – but check out the clock. Tiny Homes via Facebook

Related: https://stmdailynews.com/space-force-faces-new-challenge-tracking-debris-from-intelsat-33e-breakdown/



Consumer Corner

Nikon Announces the Nikon Z5II: Let Full-Frame Be the Next Step in Your Creative Journey


Amplify any Aesthetic & Unleash Your Full Creative Potential with the Most Feature-Packed Camera in its Class

MELVILLE, N.Y. /PRNewswire/ — Today, Nikon announced the new full-frame/FX-format Z5II, an entirely new generation of intermediate-level camera that miraculously manages to fit the latest high-end features into a lightweight camera body that will help kickstart any creative spark. The Nikon Z5II is the easiest way to level up a user’s captures with full-frame image quality, incredibly fast and intelligent autofocus (AF), excellent low-light performance, one-touch film-inspired color presets and the brightest viewfinder of any competing camera.[1]

The new Nikon Z5II uses the same high-power EXPEED 7 image processing engine as Nikon’s highest caliber professional models, the Z8 and Z9. The benefits of this processor are immediately apparent, affording incredible levels of performance and extremely fast AF with subject detection powered by deep learning (AI) technology. This highly accurate, high-speed focus is a massive leap from its predecessor, locking in at approximately one third the time. In addition, the new camera utilizes a highly sensitive back-illuminated (BSI) CMOS sensor for beautiful rendering of textures and details, even in dimly lit situations such as indoors or nighttime landscapes, with minimal noise. The Z5II further fuels your creative drive with a dedicated Picture Control button and innovative tools like Imaging Recipes and Flexible Color Picture Controls, all of which help users create a truly distinctive look with unparalleled creative control of colors.

“The benefits of the Z5II go far beyond its attainable price and small size, offering users the benefits of our most advanced EXPEED 7 processing engine, a proven full-frame sensor along with unexpected pro-level features and performance,” said Fumiko Kawabata, Sr. Vice President of Marketing and Planning, Nikon Inc. “This is the camera many people have been waiting for in order to make the move to mirrorless, since nothing comes close to matching the value of features and performance in its class.”

Reliably Fast Focus and Performance

The AF on the Nikon Z5II is remarkably precise and super-fast, effortlessly locking on and tracking a wide range of moving subjects. From fast-paced portraits to action shots, the system helps you never miss a crucial moment, even when a subject is backlit. The cutting-edge AF system can detect up to nine types of subjects for stills and video, including people (faces, eyes, heads, and upper bodies), dogs, cats, birds, cars, motorcycles, bicycles, airplanes, and trains. But it’s not just the focus that’s fast: thanks to the next-generation processing power, the Z5II also offers high-performance features from pro-level Z models, to excel in any shoot.
  • 3D-tracking AF mode keeps the target subject in focus even if it moves rapidly or erratically. This allows for subject tracking, even at high burst speeds, for sharp images again and again when photographing sports, animals or other fast-moving subjects.
  • The first full-frame mirrorless Nikon camera with AF-A focus mode. In this mode, the camera automatically switches between AF-S and AF-C focus modes in response to subject movement or changes in composition with still shooting. This allows the camera to automatically focus on the subject, with no setting adjustments when photographing. This new feature makes it simple to photograph pets, kids or other subjects whose movements are difficult to predict.
  • Fast continuous shooting speeds with a maximum frame rate of 14 frames per second in mechanical shutter mode and up to 15 or 30 frames per second (electronic shutter) with full autofocus.
  • Pre-Release Capture function when shooting in C15 and C30 modes is capable of recording images buffered up to one second before the shutter-release button is fully pressed, capturing the action before a user can react.
Embrace Low Light Like Never Before

There’s no need to be afraid of the dark with the Nikon Z5II. Featuring a powerful combination of the full-frame back-illuminated CMOS sensor and the EXPEED 7 image-processing engine, the Z5II delivers the best low-light ability in its class. Images and video are rendered with minimal noise, and AF detection remains highly capable in low light. Whether shooting indoors, twilight cityscapes or the night sky, the Z5II is built to help you capture confidently in nearly any light, preserving details and textures throughout the broad ISO range.
  • Class-leading autofocus detection down to -10 EV delivers accurate, reliable focus in dim and dark conditions—great for concerts, live performances, festivities, available-light portraiture, astrophotography and more.
  • A broad standard ISO sensitivity range of 100-64,000, expandable to Hi 1.7 (ISO 204,800 equivalent), delivers exceptional low-light capabilities and outstanding image quality with minimal noise. The max ISO is 51,200 for video recording.
  • The 5-axis in-camera vibration reduction (VR) system provides superior image stabilization equivalent to a 7.5-stop[6] increase in shutter speed at the center and a 6.0-stop increase at the peripheral areas of the frame. This allows users to create with confidence in lower light and get sharper results, even when handheld or at lower shutter speeds.
  • Focus-point VR[7] tailors stabilization to the area covered by the active AF point, for sharp rendering of the subject, even when it is positioned near the edge of the frame.
  • Starlight View Mode makes focus and composition simple in extremely low light, while the Warm Color Display Mode helps preserve night vision when working in complete darkness.
  • Extended shutter speeds up to 900 seconds (15 minutes) in manual exposure mode. Perfect for extreme long-exposure nightscapes and star trails.
Engineered to be Used, Made to be Loved

Shooting with the compact and lightweight Z5II is a satisfying and comfortable experience. The electronic viewfinder (EVF) is simply stunning and is 6x brighter than any competing model. At up to 3000 nits brightness, users can easily shoot even in the brightest direct sun with a perfect view of the frame, with real-time exposure information. Additionally, the rear 3.2″ Vari-angle LCD touchscreen rotates freely to nearly any angle, giving full freedom of composition. Get down in the street or hold it high above everyone’s heads and still be able to accurately frame the perfect shot. The grip is deep and comfortable to minimize fatigue. Additionally, the Nikon Z5II’s front, back, and top covers are made from magnesium alloy, which delivers exceptional durability and outstanding dust- and drip-resistance.

Feel the Color with Picture Controls

The Nikon Z5II is the latest camera to support one-button access to Picture Controls, plus compatibility with the Nikon Imaging Cloud. The dedicated Picture Control button opens new possibilities for expressive color, with imaginative film-inspired looks that instantly change the color tone and color of a scene. In a single press, the user can see in real time the effects of up to 31 built-in color presets plus Imaging Recipes downloaded by the user. Nikon Imaging Cloud connectivity allows users to download a wide variety of free Imaging Recipes by Nikon and created by popular creators, and to apply these recipes when shooting. In addition, the Z5II supports Flexible Color Picture Control, which allows users to create their own unique color styles using Nikon’s free NX Studio software. Flexible Color allows for a wider variety of color and tone adjustments, including hue, brightness and contrast. What’s more, these settings can also be saved as Custom Picture Controls that can be imported to the Z5II for use while shooting.

Powerful Video Features for Hybrid Users

The Z5II offers an impressive array of video features for content creators:
  • Capture immensely vivid and detailed 4K/30 UHD video, with no crop. This gives creatives the ability to shoot in 4K at full-frame, with more wide-angle freedom. For higher frame rates, the camera can also capture up to 4K/60 with a 1.5x crop.
  • Flexible in-camera video recording options with 12-bit N-RAW[8], 10-bit H.265, and 8-bit H.264. This is the first camera to be able to record N-RAW to an SD card.
  • N-Log[9] tone modes offer greater flexibility for color grading. This means Z5II users also have access to the free RED LUTs, which were developed in collaboration with RED for users to enjoy cinematic looks.
  • Full HD/120p for flexibility to create 5x slow motion videos in 8-bit H.264.
  • Hi-Res Zoom[10] uses 4K resolution to zoom up to 2X in-camera during Full HD shooting, without any loss of quality. This is useful when using prime lenses to get closer to a subject and add a dynamic look to footage.
  • Product Review Mode will seamlessly switch focus between the user and any objects that they hold up to the camera. Users can even customize the size of the active AF area.
  • Upgrade streaming while connected via UVC/UAC-compliant USB port, transforming the camera into a high-quality webcam for live streaming.
  • The Z5II also includes ports for headphones and microphones.
Additional Features of the Nikon Z5II
  • Dual SD card slots
  • Bird detection mode makes it easier to detect birds in motion and in flight.
  • Equipped with Nikon’s exclusive portrait functions, including Rich Tone Portrait that realizes radiant and beautiful rendering of skin textures, and Skin Softening that smooths the skin while leaving hair, eyes, and other details sharp.
  • Capture high-resolution images with Pixel Shift shooting[11] to portray stunning depth and rich textures, from architectural details to rocky landscapes and vibrant artwork, creating images at a staggeringly high resolution of up to approx. 96 megapixels (must be processed with Nikon’s free NX Studio software).
Adobe Creative Cloud Promotion

For a limited time, customers who purchase the Nikon Z5II and register their camera will also get 1 year of Lightroom + 1TB of Adobe Creative Cloud storage. For more information and terms of this promotion, please visit www.nikonusa.com.

Price and Availability

The new Nikon Z5II full-frame mirrorless camera will be available in April 2025 for a suggested retail price (SRP) of $1699.95* for the body only. Kit configurations include the NIKKOR Z 24-50mm f/4-6.3 lens for $1999.95* SRP, and the NIKKOR Z 24-200mm f/4-6.3 VR lens for $2499.95* SRP. For more information about the latest Nikon products, including the vast collection of NIKKOR Z lenses and the entire line of Z series cameras, please visit www.nikonusa.com.

About Nikon

Nikon Inc. is a world leader in digital imaging, precision optics and technologies for photo and video capture; globally recognized for setting new standards in product design and performance for an award-winning array of equipment that enables visual storytelling and content creation. Nikon Inc. distributes consumer and professional Z Series mirrorless cameras, digital SLR cameras, a vast array of NIKKOR and NIKKOR Z lenses, Speedlights and system accessories, Nikon COOLPIX® compact digital cameras and Nikon software products. For more information, dial (800) NIKON-US or visit www.nikonusa.com, which links all levels of photographers and visual storytellers to the Web’s most comprehensive learning and sharing communities.
Connect with Nikon on Facebook, YouTube, Instagram, Threads, and TikTok. Specifications, equipment, and release dates are subject to change without any notice or obligation on the part of the manufacturer.

1. Based on Nikon research as of April 3rd, 2025.
2. Measured in accordance with CIPA standards. The measurement values are based on the following testing conditions: Subject brightness of 10 EV; in photo mode using aperture-priority auto (A), single-servo AF (AF-S), single-point AF (center), at 70-mm focal length with the NIKKOR Z 24-70mm f/4 S.
3. Max framerate of 14 fps in mechanical shutter mode is JPEG only. C15 and C30 are JPEG only, electronic shutter modes. Rolling shutter distortion may occur depending on the type of subject and shooting conditions.
4. Available only with JPEG recording.
5. In photo mode using single-servo AF (AF-S), single-point AF (center), at ISO 100 equivalent and a temperature of 20°C/68°F with an f/1.2 lens.
6. Based on CIPA 2024 Standard. Yaw/pitch/roll compensation performance when using the NIKKOR Z 24-120mm f/4 S (telephoto end, NORMAL).
7. Only in photo mode with NIKKOR Z lenses not equipped with VR. Does not function when multiple focus points are displayed.
8. When a frame size and rate of [[FX] 4032×2268 30p], [[FX] 4032×2268 25p], [[FX] 4032×2268 24p], [[DX] 3984×2240 30p], [[DX] 3984×2240 25p], or [[DX] 3984×2240 24p] is selected for [Frame size/frame rate] in the video recording menu. Picture quality is equivalent to that of a video quality setting of [Normal]. Use of Video Speed Class 90 (V90) SD memory cards is recommended.
9. When [H.265 10-bit (MOV)] or [N-RAW 12-bit (NEV)] is selected for [Video file type] in the video recording menu.
10. Hi-Res Zoom is available when all the following conditions are met: [FX] is selected for [Image area] > [Choose image area] in the video recording menu; [H.265 10-bit (MOV)], [H.265 8-bit (MOV)], or [H.264 8-bit (MP4)] is selected for [Video file type] in the video recording menu; and a frame size and rate of [1920×1080; 30p], [1920×1080; 25p], or [1920×1080; 24p] is selected for [Frame size/frame rate] in the video recording menu.
11. Both the subject and the camera must be still. RAW images shot with the pixel shift shooting feature must be combined using NX Studio.
*SRP (Suggested Retail Price) listed only as a suggestion. Actual prices are set by dealers and are subject to change at any time. SOURCE Nikon Inc.


