Account

Comment Karma: 2180
Comments: 8680
Topics: 172
Build Guides: 24
Completed Builds: 0
Created: Oct. 31, 2015, 1:48 p.m.

About 526christian

Wow, 3 years on PCPartPicker already?

Intro

I'm just someone with an internet connection and sometimes a little bit of free time.

If you have a question, message me and I will do my best to answer it. Sometimes I might forget and never reply to a message or comment of yours. Sometimes I'm messaging or replying to someone else through my inbox, and your message or comment gets buried beneath and I never notice it. If you really want a reply from me, try messaging or replying to me again.

Even if I haven't commented recently, I still come here and lurk.

Also, you can find me on Steam, as well as in the official PCPP Discord.

I am also on Reddit. There, I post mostly things involving diet and food. See: An overview of common “low-carb” sweeteners: Usefulness, relevance to low-carb diets, and potential health impacts, my "general nutrition collection", and my "keto jumpstarter", as well as this recipe, this recipe, and this recipe.

I am also, as of now, a contributing writer for Logical Increments' blog.

Primary contents of my profile:

Intro

Primary contents of my profile

Using the site visual tutorials

Forum posts that may interest you

Threads that I put way too much time into

Semi-comprehensive list of notable users who are or were once active

Staff

Myths, misconceptions, and misinformation in hardware

Tips on expanding your knowledge

Equations and stuff

Using the site visual tutorials:

Part List Linking Tutorial
Markup Tutorial
Saving/Saved Part Lists Tutorial
Favorite Parts Tutorial
Parametric Filter Tutorial

Forum posts that may interest you

How to make a decent build guide - by fn230
Forum formatting guide - by FH100
A Quick Overview of Mechanical Keyboards - by PhantomTaco

Threads that I put way too much time into

526christian's learning thread

This thread contains links for people to learn a bit about things like CPUs, monitors, computer fans, SSDs, and more. I ordered and categorized each section so that you can more easily progress in your knowledge of a given thing. I also listed good reviewers for various things near the bottom. Last updated 12/2/2018.

On CPU/GPU bottlenecking in games

This post focuses on explaining bottlenecking between a CPU and GPU in games. It clarifies the roles of the CPU and GPU in games and their relationship, then goes over some general concepts related to CPU/GPU bottlenecking, game-related factors that may cause the bottleneck to switch or put more load on one or both components, ways to alleviate a CPU or GPU bottleneck, and how to identify which part is the current bottleneck.

526christian's Complementary Thread

This thread is complementary to my learning thread, as indicated by the name. It serves to fill in the many gaps left by the learning thread's specific focus and requirements for additions: it covers important background knowledge and has how-tos for things like troubleshooting, building a PC, storage-related situations like upgrading to an SSD, and more. A must-use alongside the learning thread. Last updated 12/2/2018.

Main Memory: A Primer (UPDATE NEEDED)

This post, which I really wish I named differently, goes into detail about main memory, including its role in a computer, specs, physical organization, the basics of its operation, latency and bandwidth and their factors, compatibility in modern systems, and the major changes across different SDRAM generations. This post is a constant, large wave of information, so reading it with mental breaks is advised.

RAM and its effects in games redux

A re-do and re-imagining of a post I made (which still exists, by the way) in the relatively early days of my account, when I was just picking up PCs as a hobby. This thread focuses on main memory and how it affects gaming performance. It begins by discussing some general main memory concepts like its role, physical organization, specs, and performance, although of course in less detail than Main Memory: A Primer. It also discusses memory bottlenecking and why so many past (and current!) benchmarks on the effects of main memory in gaming were misleading. Then, it links to and analyzes benchmarks of varying DIMM speeds, channel configurations, rank counts, and timings in recent games. The post ends with a conclusion that summarizes what we can take away from current benchmarks, and considerations to make when choosing memory for a gaming PC. Last updated 11/2/2018.

Modern CPU Microarchitecture: An Overview

A document over 50,000 words long, it focuses on CPUs from a hardware perspective, first going over many of the basics, then taking a more detailed look at each part and function of modern microprocessors: from the memory subsystem, cache, and registers, to the front-end and execution engine of cores, to the interconnects connecting cores and other components on the chip. It is the most comprehensive single source covering CPUs and main memory at this level of depth that I am aware of. I also briefly ramble about the future of CPUs and main memory. Further reading is also recommended.

Modern GPU Microarchitecture OR Modern GPUs: A Technical Primer (title undecided)

A very much work-in-progress post that will be kind of like Modern CPU Microarchitecture: An Overview, but for GPUs. And shorter.

Unnamed post about choosing parts for quiet and silent PCs

Yeah. Work in progress.

Understanding SSD data reliability and security (no longer planned)

This one was still in the note-taking and research stage when I shelved it. The first half of the post would have focused on endurance, drive lifespan as a whole, failures, and errors, including the factors at play, misconceptions around them, and what is or can be done about them. The second half would have revolved around security, mainly the risk of data being accessed that you might not want accessed, especially data that you want deleted. I might or might not still pick this up again.

On using PSU tier lists, and choosing a PSU effectively (no longer planned)

Might or might not actually do this one. It would discuss why you can't make useful comparisons of PSU quality with tier lists, the flaws and limitations of tier lists, and how to actually get recommendations or choose a PSU yourself (to some extent). I might or might not still pick this up again.

Semi-comprehensive list of notable users who are or were once active

Updated: 10/31/2018 to include a bunch of people I missed as well as an exhaustive list of now-inactive OG and pseudo-OG users. The inactive and deleted/banned lists are a good nostalgia trip for anyone who's been here long enough.

Currently active:

OrionFOTL (semi-active)
tiny_voices
fn230
Xorex64
floridaboz
RazerZ
rhali8 (semi-active)
gorkti200
Eltech
SuperGojira2001
Allan_M_Systems
PureBlackFire (semi-active)
Radox-0
vagabond139
pegotico
Rexper
tomtomj2
Cicero_
WirelessCables
MoreAlphaLine (formerly MoreAlphaLineGaming and AlphaLineGaming)
tragiktimes101
Gilroar (formerly Ada)
Wolfemane
Siwini
yawumpus
mark5916
m52nickerson
Chillsabre
Cobyfield
AceBalthazar (semi-active)
GentlemanShark
johngerges582 (semi-active)
rosswalker (semi-active)
G_I_L (formerly BlockySquidZ, that account being inactive)
AwesomeBuilderXE1901 (semi-active)
Vinyl_Scratch_ (semi-active)
Shakaron
manirellis_fridge (semi-active)
DPRutledge (semi-active)
romanvalkre
MichelWeber
elvenson (semi-active)
Warlock
MannyPCs (semi-active)
Enrico411
DarTroX (formerly DarTro)
Pawacorn
xPat

(semi-active) indicates someone who only comments sparingly or periodically.

Inactive (unfortunately):

will_rippey (now on new, inactive account not linked here to help keep it a secret; contactable)
IwannaPC (contactable)
Amazian
MisterJ (banned, now on new, inactive account)
bobby567
S0nny_WarBucks58
jhpcsb117 (banned/self-deleted and previously inactive)
colinreay
Matthewtina2015 (contactable)
piemancoder
larsG35
BigAll (contactable)
lisa_simpson (formerly dan_castellaneta; contactable)
Geode1010
Kokonip
~Pcjulian12343
JAShadic
~Chuck38
digah2750
ajcardiac
Phillip.Phillip
SilverWolf149 (contactable)
~Nuckles_56 (contactable)
LemonComputers
Ender_Mist
Charmin (contactable)
~commodore64
FHD32423 (contactable)
Growliff1234
pulltabking1971
~waffle502
FH100
Duke_Of_The_Giraffes
FACUNDO
Scotty0709
Jackson_Tyler96
dphillips157
0BSiD1AN
Unhandybirch656
noap7
mrluckypants96 (second account, all but one comment deleted)
Athenriel (contactable)

~ marks a user in the above "inactive" list who last commented within the past 4 months.

(contactable) indicates someone who I know can be contacted. Currently, that means someone who I am able to contact, either directly or indirectly (via someone else), through Discord or Steam.

Self-deleted or banned, but previously active:

RaspberryPiFan (formerly naynayr1)
TheOfficialCzex
ThatGoat
jipster69 (contactable)
Eurobeat
Spycrab
BaccaFly
Ravenhelm
Geekazoid
legonate416 (contactable)
buch88own
Bespers
tiedyetophat (contactable)
Randomperson51 (formerly imapie4688; contactable)
Cmdr_Tim
rolfejc

Staff:

"The Roast Master" - Philip
"Philip's Wrath" - Ryan
"The Designer" - Phil
"Windows Phone Guy" - Jack
"Another Ryan" - Alex
"Pre-build Pro" - Barry

And for a more mysterious staffer, there is Daniel.

There's also Jenny, who is the relationship account manager. As far as I know, Jenny has no PCPP account.

Philip, Ryan, and Alex are the most active; they are the ones you will see doing moderation, and they interact with the community once in a while. Alex is normally the one who adds parts to the database, Ryan more often responds to feedback and questions about the site, and Philip just... does whatever Philip does.

There are also multiple unnamed staff who do not have accounts.

Ex-staff:

okp11, who was, according to Ryan, a summer intern.

Myths, misconceptions, and misinformation in hardware

M = myth, misconception, or misinformation

T = truth

CPUs

M: Hyper-threading / SMT adds fake cores that don't perform as well as real cores.

M: Threads are like weaker cores.

T:

These show a fundamental misunderstanding of what threads, hyper-threading, and SMT are.

First things first, there are two things called a "thread": a thread of execution, or "software thread", and a hardware thread context, or more simply a "hardware thread".

A thread of execution is, on a low level, what performs work in software, consisting of a sequence of instructions.

A "hardware thread" is one of the most misunderstood things in CPUs. A "hardware thread" refers to the registers (the highest level of the memory hierarchy, which hold in-use data, instructions, and important information around the running program) which hold a thread's current "state" - that is, the configuration of information relevant to execution - as well as the program counter register which tells the core where to get more instructions from. Each "hardware thread" thus is the infrastructure which holds onto a thread's state and maintains its own stream of instructions, with access to a core's execution resources.

A "hardware thread" appears to the operating system as a "logical processor", also often called a "virtual core" or "logical core". As far as the operating system is concerned, each logical processor is like a CPU core, although that doesn't mean software threads are scheduled as such. A "hardware thread" is not something that has performance in the sense of it being something comparable.

SMT and hyper-threading (in truth, merely Intel's name for their own implementation of SMT) both involving a duplication of hardware threads. You see, it is rare for a single software thread to use all of a core's execution units, the actual hardware units which execute the instructions. The idea is, by having multiple hardware threads and thus having the operating system schedule multiple software threads onto a core simultaneously, we can "fill in the gaps" so to speak, increasing utilization of execution units and improving performance. Instructions are fetched for both threads, and executed simultaneously. There's no trick, there's no addition of resources. But, there is better use of what is already there.
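To make the software-visible side of this concrete, here's a minimal sketch (assuming Python with the third-party psutil package installed) that reports how many logical processors the operating system sees versus how many physical cores exist:

```python
import os
import psutil  # third-party: pip install psutil

logical = os.cpu_count()                    # logical processors ("hardware threads")
physical = psutil.cpu_count(logical=False)  # physical cores

print(f"Physical cores: {physical}")
print(f"Logical processors: {logical}")
if logical and physical and logical > physical:
    print(f"SMT/hyper-threading active: {logical // physical} hardware threads per core")
```

On a quad-core CPU with 2-way SMT, this would typically report 4 physical cores and 8 logical processors.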

For more info on SMT as well as threads to a limited extent, see Modern CPU Microarchitecture: An Overview.

M: Memory speed is very important for Ryzen because Infinity Fabric.

M: Ryzen CPUs benefit much more from faster memory.

T:

Let me preface this by saying, yes, Ryzen CPUs do tend to see better performance scaling with increasing main memory frequency - but, in truth, not to a significant extent as far as absolute numbers are concerned. Most people who say these things are basing it on a combination of past, misleading, gaming-specific benchmarks of memory speeds on Intel systems and the hype around Ryzen's Infinity Fabric being clocked the same as the memory clock (memCLK).

The one situation where Infinity Fabric latency always plays a role is main memory access latency, since the IF is how cores and the integrated memory controller (IMC) communicate with each other, the cores sending data requests and the IMC sending responses. We see memory access latency decrease almost linearly as memory frequency increases in a Ryzen system, but to what extent the latency comes from the IMC itself (which runs at the same frequency as memory, so IMC-based latency should decrease linearly with frequency) is unknown and may be playing a role.

But that's not what people are thinking of when saying memory speed is important or more useful on Ryzen CPUs. Most are basing it on communication between threads running on different CCXs (core complexes), since that is where a sizable latency is experienced (a much lower latency is experienced between cores in the same CCX). The problem with this is that any sort of inter-thread communication that invokes the Infinity Fabric would be in the form of cache coherence transactions. Message-passing communication is not used in multi-core CPUs, nor between multiple CPUs in a multi-socket system, besides a few exceptions irrelevant to us consumers and prosumers.

Cache coherence, if you aren't familiar, is a feature of modern microprocessors which aims to ensure every core / processor is kept "up-to-date" on the state of any copy of a block of data. Different cores have their own caches where they might modify a block of data - a block of data that other cores in a multi-threaded program might also possess a copy of. Cache coherence is enforced through cache coherence transactions where messages are sent between cache controllers. This is the only sort of communication between cores involving direct transfers of data in a Ryzen CPU that might be happening, to my knowledge.

Thus, placing extra emphasis on main memory frequency in a Ryzen system on the basis of communication between CCXs assumes:

  1. A program is being run which has the parallelization for its threads to be scheduled across multiple CCXs. A CCX may have only 4 hardware threads, but may also have 8 if the CPU offers SMT. Both Windows and Linux schedulers schedule appropriately by placing a program's threads in the same CCX before any other, so threads being randomly placed is not a concern.

  2. That those threads scheduled on different CCXs both have copies of certain data.

  3. That those threads are modifying that shared data, and that a load operation in another core will access the data while it is in L2 cache (the cache level where cache coherence is usually implemented in multicore processors, as L1 is very performance-sensitive and shouldn't be slowed down by contention from other cores).

  4. That both 2 and 3 happen to a significant enough extent to cause a performance hit big enough to justify the cost of higher-frequency memory.

The cache coherence protocol used in Zen microarchitectures is MDOEFSI. We have pretty much no details about it, and the usual meanings for those letters imply some redundancy, so it's pretty confusing. The "I" presumably means Invalid. If it really does, then it is an invalidation-based protocol: if a shared block of cached data is modified by one thread, the shared copies are "invalidated" and won't be updated until the copy or copies are accessed. Then, and only then, would a cache coherence transaction take place, invoking the Infinity Fabric, and only across CCXs. If that invalidated block isn't accessed, this won't happen. If "I" doesn't stand for "Invalid" and the coherence protocol is actually update-based, Infinity Fabric performance traits will only matter if the out-of-date block is accessed before it is updated.
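To illustrate the invalidation idea (and only that - this is a toy MSI-style model I wrote for illustration, not AMD's actual MDOEFSI protocol), here's a sketch where a write cheaply invalidates the other CCX's copy, and the costly cross-CCX transfer only happens on a later read:

```python
# Toy invalidation-based coherence model: two "CCX caches" share one block.
MODIFIED, SHARED, INVALID = "M", "S", "I"

class Cache:
    def __init__(self, name):
        self.name, self.state, self.value = name, INVALID, None

def write(writer, others, value):
    # Writing takes exclusive ownership; other copies are merely marked
    # Invalid. No data crosses the fabric at this point.
    writer.state, writer.value = MODIFIED, value
    for c in others:
        c.state = INVALID

def read(reader, others):
    if reader.state == INVALID:
        # Only now does a cross-CCX coherence transaction occur (this toy
        # assumes another cache holds the Modified copy).
        owner = next(c for c in others if c.state == MODIFIED)
        owner.state = reader.state = SHARED
        reader.value = owner.value
        print(f"{reader.name}: cross-CCX coherence transaction")
    return reader.value

ccx0, ccx1 = Cache("CCX0"), Cache("CCX1")
write(ccx0, [ccx1], 42)    # invalidation only; cheap
print(read(ccx1, [ccx0]))  # the expensive transfer happens here, on access
```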

There is, however, another situation where cross-CCX latency plays a role: Thread migration. When the operating system's scheduler switches out a thread for whatever reason, it might schedule it onto a different core than the one it was on before. Schedulers will usually try to re-schedule a thread on the same core as data relevant to it might still be stored in cache, but sometimes there might be a higher-priority thread in place. It is usually more timely in terms of scheduling latency to simply schedule to another core. This comes at the cost of disassociating threads from their associated, cached data, but is usually still preferable performance-wise even then.

As I already said, Ryzen CPUs, in truth, do not display a significantly higher performance scaling with higher-frequency memory than Intel CPUs, although I do concede there is a slight difference. The issue is, the benefits are either over-hyped in regards to Ryzen, or underestimated in Intel's offerings, or both.

M: Intel's CPUs are better.

T: Patently ridiculous. AMD's Zen microarchitecture is easily competitive, and their CPUs are great value. Future iterations of Zen will likely only get better and even more competitive.

Monitors

M: You should get a monitor with a response time below [x arbitrary number]ms.

M: [This monitor]'s response time is lower than [this one]'s.

T:

Response time does not have a standard testing method, nor is what is actually measured standardized. Independent response time tests show inconsistency with the advertised response time across different test methods, except in certain ones (presumably those used for the spec).

Regardless, response time is hardly a realistic concern in general. The only way it can cause a problem is in the form of ghosting (which occurs when the response time in a given situation is longer than the refresh cycle) - it has nothing to do with, say, input lag. If a given monitor is not known to have ghosting problems, then response time is not to be worried over.

SSDs

M: NVMe > SATA

T:

This is somewhat nitpicky, but...

NVMe is a host interface protocol for PCI-e interface SSDs, and SATA is an interface. AHCI is the host interface protocol used for SATA drives, and also some PCI-e SSDs prior to the introduction of NVMe. It'd be more accurate to say PCI-e > SATA, or NVMe PCI-e > SATA.

M: NVMe SSDs are better for boot times, game/software load times, blah blah blah.

T:

The typical user will not see an appreciable benefit from an NVMe PCI-e SSD over a conventional SATA 3 SSD.

Boot times do not show significant benefits. Unless you have an unrealistic number of large programs configured to run on startup, it is unlikely any appreciable improvement in boot times can be seen. In addition, video games in particular show only very limited benefits to loading times unless loading many, very high-resolution textures (to an extent that is well out of the norm). When using a modern SSD, PCI-e or otherwise, the primary bottleneck for load times in games is compute at the CPU - that is, decompressing assets (many games use a lot of compression, in expectation of a slower storage drive like an HDD and to reduce storage needs) followed by actually setting up the scene and loading objects. On top of that, this is often done on only one thread.

Similarly, other software rarely sees notable improvement in startup times. Not even video editing or rendering will realistically see benefits, unless you are editing in and exporting to large raw formats without enough main memory while using such a drive as scratch space. Even the high-quality compression formats often used in professional, high-end applications can't tax a decent SATA 3 SSD.

Oftentimes, when further looking into claims that someone has experienced a noticeable benefit from an NVMe PCI-e SSD, it is revealed that the same person has their other SSD almost completely filled with data.

Someone who works with very large images in photo editing software (leading to memory use above what the system has) and uses such a drive as a scratch drive can benefit.

Ultimately, NVMe is most useful in the enterprise sector - scale-out and relational databases, transaction processing, servers running dozens to thousands of virtual machines simultaneously for users, financial data processing, and so on. Some scientific applications like a few molecular dynamics software packages might also benefit. NVMe is not a big deal in the prosumer space, and certainly not a big deal for consumers. Modern software simply does not access storage in a way that leverages NVMe's massively parallel command processing and queuing ability - they don't access that much data at a time.

M: MLC SSDs > TLC SSDs

T:

SSDs are more than just the flash; you have the controller - the "brain" of the device - too. In order for differences between flash types to reveal themselves fully in actual use, you need to compare SSDs with the same controller and firmware (think of the firmware like the SSD's operating system). The controller plays a significant role in SSD endurance: in reality, endurance also depends a lot on the effectiveness of the controller's internal algorithms and ECC, not just the flash alone.

Samsung's TLC-based SSDs, and many other TLC-based SSDs for that matter, are very well capable of lasting through many hundreds of terabytes (often even to ~1.5 PB) of writes to NAND. Because of the factors outside of NAND type contributing to endurance, it's very much possible for TLC-based SSDs to outlast similar-capacity MLC-based ones in endurance. Sometimes they don't, sometimes they do. Like I said, you can't really compare flash type across SSDs when looking at endurance unless you're looking at ones with the same controller and firmware.

A similar idea can be applied to performance as well. The controller, its firmware, and the parallelism available to the controller (its ability to do multiple operations at once, including via the channels between the controller and flash - something not influenced by flash type) play an even bigger role than the flash type in just about any place and way you can measure the performance of an SSD. Here's some food for thought: oftentimes, the longer latencies for writing data to similar-node TLC lead to poor write performance when the cache is full. However, direct-to-die writing can help minimize the impact, and SSDs such as the SK Hynix SL308 can, thanks to a highly effective emulated SLC write caching algorithm, write data just as fast as many similar-capacity (non-PCI-e) MLC-based SSDs with a saturated (or what would be a saturated) cache. And realistically, a saturated cache while writing data is about the only time you'll notice a performance difference between TLC and MLC on a similar "platform"; despite that, a manufacturer's choice of controller and firmware can diminish or effectively remove that one noticeable advantage MLC might have.

Power supplies

M: Corsair's 2015 CXM's are dated/bad and you should get the 2017 CX.

T:

An argument based on either pure misinformation, or a misunderstanding of the relevance of the switching topology in the end product.

M: OPP does not work in EVGA B3s, making them fire hazards.

M: (a common response) OPP not working doesn't matter because you aren't pulling that much power from them.

T:

In the reviews showing B3s have non-working OPP, only the 450W was a fire hazard, purely because the fuse did not blow. The others still failed from overload, but were not fire hazards and had functioning fuses. If all 450W B3s have non-working fuses, then that's a big problem regardless of the non-functional OPP - the point of the fuse is to prevent a fire hazard in the event of a fault. That fault doesn't have to be failure from overload because OPP doesn't work; it could be something entirely different.

Though nobody is realistically going to overload an EVGA B3 with too power-hungry a system, the OPP issue does still matter. Some graphics cards can experience large jumps in power consumption that might overload the power supply and cause damage to the primary-side MOSFETs over time. Also, if there is a short on the outputs of the PSU for whatever reason (most likely component failure) that the short circuit protection can't detect, OPP steps in to prevent overload and thus additional damage, though the real-life importance of that is somewhat dubious (you're going to RMA or throw out the PSU if it has a fault anyway).

Tips on expanding your knowledge

1. Learn Google search tricks, and have good googling skills in general. Seriously, this can get you places.

My favorite tricks are:

Making Google search for specific wording. I do this by putting quotation marks around what I want it to search for letter-by-letter.

Having Google search a specific site or domain. I do this by typing site: followed by the website or domain.

Using a minus sign before a word, which makes Google ignore results with that word.

Using filetype:pdf to limit results to .pdf only, which makes it easier to find things like whitepapers, research papers, and university lecture slides.
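These operators can be combined. For example, "rank interleaving" filetype:pdf site:edu limits results to PDFs on .edu domains (usually lecture slides and papers) that contain that exact phrase.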

2. Read reviews from professionals. Then read some more. You can learn a lot from them, whether it's about graphics cards, motherboards, or even power supplies. Sometimes in discussion threads about reviews you can find even more.

3. Don't be afraid to ask questions, no matter how stupid. Even the most experienced and knowledgeable had to start somewhere, and that place isn't where they are now.

Also,

4. When you finally learn something, it helps to ask yourself whether it matters and, if it does, how much it really matters. This is a step that I didn't take early on, that lots of others don't take, and that shouldn't be forgotten.

It also helps to understand how different areas of knowledge may play a role.

From my own observations, we can describe knowledge in regards to computers in three different categories:

  1. Practical.

  2. Technical.

  3. Working.

Practical describes knowledge of information that may be useful. For example, knowledge of different cooling setups and how they compare, or knowledge of how different core counts in CPUs play a role in real-world performance, both would fall under practical knowledge.

Technical knowledge describes knowledge of technical background information. This would be, in general, knowledge of "what", "how", and "why". Technical knowledge ties in heavily with practical knowledge. Technical knowledge can sometimes become practical knowledge, and having very high practical knowledge often requires a decent amount of technical knowledge, depending on how deep you want to go. For example, knowing how rank interleave impacts gaming performance first requires knowing that rank interleave is a thing and how it can be enabled, as well as how bottlenecking in games works. But knowing the practical differences between closed loop liquid coolers and heatsink fans only requires a basic knowledge of what they are.

Working knowledge describes familiarity and experience. Knowing how to troubleshoot, knowing how to overclock off-hand, knowing how to install Windows, knowing how to use UEFI settings. You get the idea. A limited amount of technical knowledge is required for a good working knowledge for the sake of knowing what things are and mean.

Equations and stuff

TDP * (OC MHz / Stock MHz) * (OC VCore / Stock VCore)^2 = x

ex: 91 * (4600 / 3500) * (1.25 / 1.1)^2 = 154 watts

A very rough estimation of the power consumption of an overclocked CPU or GPU under load. Of course it won't be perfect, because VCore isn't constant and because of the frequent, small changes in load; but it helps give an idea of the increase in power consumption. Realistically, actual power consumption will be lower than the estimation.
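As a sketch, here's the same estimate in Python (function and parameter names are mine, purely for illustration):

```python
def oc_power_estimate(tdp_w, stock_mhz, oc_mhz, stock_vcore, oc_vcore):
    """Very rough load power estimate for an overclocked CPU/GPU, in watts."""
    return tdp_w * (oc_mhz / stock_mhz) * (oc_vcore / stock_vcore) ** 2

print(round(oc_power_estimate(91, 3500, 4600, 1.1, 1.25)))  # ~154 watts
```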

(Timing in clock cycles / I/O bus clock in MHz, AKA real frequency) * 1,000 = x

ex: (16 / 1200) * 1,000 = 13.33 nanoseconds

OR

(Timing in clock cycles / Transfer rate in MT/s) * 2,000 = x

ex: (16 / 2400) * 2,000 = 13.33 nanoseconds

This equation gives us the time in nanoseconds for a memory timing that is measured in clock cycles, such as CAS latency.
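Or, as a quick Python helper (names are mine):

```python
def timing_ns(timing_cycles, transfer_rate_mts):
    """Convert a memory timing in clock cycles (e.g. CAS latency) to nanoseconds."""
    return timing_cycles / transfer_rate_mts * 2_000

print(round(timing_ns(16, 2400), 2))  # 13.33 ns for CL16 at 2400 MT/s
```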

x = (P/E cycle endurance * (1 + over-provisioning)) / (365 * DWPD * write amplification factor * compression ratio)

x = (3,000 * (1 + 0.07)) / (365 * 0.16 * 1.5 * 0.8)

x = 3,210 / 70.08

x = 45.8 years

An equation where x represents the estimated lifespan of an SSD in years. This isn't the most useful, because the write amplification factor isn't always constant (though if host writes and NAND writes are tracked in SMART data, you can get an idea of your average WAF), and neither is DWPD. However, it's good for showing a worst-case scenario for an SSD, to get an idea of how long it would last assuming it doesn't get killed by a random part failure, a power surge, or firmware corruption - all of which are much more likely for the people reading this in the first place.

Over-provisioning in most client (consumer/prosumer) SSDs is 7%, versus 28% for enterprise-grade SSDs. This is set by the manufacturer, and users have no control over it. DWPD is drive writes per day, or the ratio of the host writes per day in gigabytes to the advertised capacity of the SSD (for example, 80 gigabytes written in a day divided by 500 gigabytes of storage gives 0.16). DWPD depends entirely on what you are doing with the SSD. The write amplification factor is the ratio of the amount of data written to the NAND flash to the amount written by the host. This is always above 1, due to background operations like garbage collection, and will vary by workload. The compression ratio is the size of a given bunch of data when compressed by the SSD compared to uncompressed; in this equation, it's meant to be an average. An SSD controller that does not perform compression has a compression ratio of 1, and any that does will have one below that, depending on the effectiveness of the compression algorithm and the entropy of the data being written.
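The same estimate as a Python sketch (variable names are mine; plug in your own drive's numbers):

```python
def ssd_lifespan_years(pe_cycles, over_provisioning, dwpd, waf, compression_ratio):
    """Worst-case-style estimate of an SSD's lifespan in years."""
    return (pe_cycles * (1 + over_provisioning)) / (
        365 * dwpd * waf * compression_ratio)

print(round(ssd_lifespan_years(3_000, 0.07, 0.16, 1.5, 0.8), 1))  # ~45.8 years
```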

I/O bus clock in GHz (AKA real memory frequency) * 2 * number of memory channels used * 64 / 8 = x

ex: 1.5 * 2 * 2 * 64 / 8 = 48 GB/s

This equation gives us the maximum theoretical memory throughput for a memory subsystem in gigabytes per second. This number will never actually be reached, however, as doing so would require data to be transferred every clock cycle. The example above shows us the max theoretical memory throughput for DDRx-3000 memory in a dual-channel configuration.
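And the bandwidth formula as a Python sketch (names are mine; a 64-bit bus per channel is the DDR standard):

```python
def peak_bandwidth_gbs(real_clock_ghz, channels, bus_width_bits=64):
    """Maximum theoretical DDR memory throughput, in GB/s."""
    return real_clock_ghz * 2 * channels * bus_width_bits / 8

print(peak_bandwidth_gbs(1.5, 2))  # 48.0 GB/s for dual-channel DDRx-3000
```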