HEAVILY UTILIZED CLOUD
CLOUD SALES FACTS:
- About 120-130 million cloud CPUs and 200,000 cloud GPUs are sold annually
- Data center hardware updates are usually 18 months apart
- A full data center refresh can take 3–5 years to complete
- Service contracts must remain flexible to meet changing needs
CUSTOMERS NEED TO GET WHAT THEY NEED WHEN THEY NEED IT
READILY AVAILABLE CLIENT

CLIENT SALES FACTS:
COULD CLIENT DEVICES SHARE THE CLOUD WORKLOAD?
COMPUTING TODAY
- Hardware resources are separated into client devices and cloud services
- Content assigns a role to each resource
- The relationship between client and cloud is asynchronous
STANDARD ROLES
- Client device is 100% responsible for processing
- Cloud service is responsible for processing, and content is streamed to client devices through a network
- Client device is responsible for processing; the cloud serves as central storage, a collective reference point, or a coordinator of client devices
SILOED EFFICIENCY
- A data center may assign computing resource needs to available hardware so that the cloud service runs efficiently. However, this efficiency may have little to do with the user experience
- At the client device level, content makers will try to get the most out of available CPU, GPU, and memory resources. However, the user experience is limited by the capabilities of the device's local hardware.
ARE OUR CUSTOMERS FUTURE-PROOFED?
IS YES ALWAYS THE ANSWER?
- Could cloud scale fast enough to meet user experience needs?
- Are capabilities equal between users?
- Is necessary data center growth sustainable (e.g. space, water, electricity)?
- Is there evidence that computing is fully equipped for use cases like remote work and play, artificial intelligence, and scientific research?
IS OUR ECOSYSTEM FUTURE-PROOFED?
IS YES ALWAYS THE ANSWER?
- Are there enough data center resources for business growth and customer needs?
- Given the complexity of how data center hardware is made and supported, can it scale fast enough?
- Are there additional methods for supporting customer growth and viability?
- Is the marketability and value of all computing sectors assured?
JOIN TIFCA TODAY AND LET’S FUTURE-PROOF TOGETHER
TIFCA wants customers to get the computing resources they need when they need them. We also want products and services to grow in importance and value as our ecosystem advances. Given the complexity and lengthy refresh times of cloud resources, combined with the growing demands of use cases like artificial intelligence, remote work and play, VDI, and more, there is urgency to solve the challenges that stand in the way of these ambitions.
In addition to improving cloud efficiency, the solution is to also leverage the capabilities of client devices and the networks that connect them. Billions of devices could be contributing to customers’ workloads, reducing network bandwidth, and freeing cloud resources so that more ROI can be reaped across our industry and its products and services.
To do this, our ecosystem and its customers need to achieve Triparity. In this two-part term, “Tri” refers to the computing ingredients of client, cloud, and network, and “Parity” is the pursuit of balance between these three elements. That balance is what increases the ROI of computing infrastructure and makes it possible to do more for more users. The Triparity Initiative is TIFCA’s industry effort to develop a workload balancing framework that leverages our ecosystem in an efficient and sustainable manner. Join TIFCA today and let’s achieve Triparity together.
HYBRID CLIENT-CLOUD
It could be more efficient and productive to treat applications as a dynamically changing body of workloads. With an awareness of client, cloud, and network capabilities, each workload could be assigned to run where it is most effective, at or between the client and cloud computing levels.
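As an illustration only, the idea above can be sketched as a toy scheduler that places each workload on the client or the cloud based on their reported capabilities. The resource names, compute units, and latency thresholds below are hypothetical assumptions, not part of any TIFCA framework:

```python
# Illustrative sketch only: a toy scheduler that assigns each workload to the
# client or the cloud based on free capacity and network latency. All names
# and numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str            # "client" or "cloud"
    free_compute: float  # abstract compute units currently available
    latency_ms: float    # round-trip latency to reach this resource

@dataclass
class Workload:
    name: str
    compute_cost: float       # abstract compute units required
    latency_budget_ms: float  # maximum acceptable latency

def assign(workload, resources):
    """Pick a resource that fits the workload within its latency budget,
    preferring the one with the most free compute."""
    eligible = [r for r in resources
                if r.free_compute >= workload.compute_cost
                and r.latency_ms <= workload.latency_budget_ms]
    if not eligible:
        return None  # no resource can take it right now
    best = max(eligible, key=lambda r: r.free_compute)
    best.free_compute -= workload.compute_cost
    return best.name

client = Resource("client", free_compute=4.0, latency_ms=1.0)
cloud = Resource("cloud", free_compute=100.0, latency_ms=40.0)

# Latency-sensitive work stays local; heavy batch work goes to the cloud.
print(assign(Workload("input handling", 1.0, 10.0), [client, cloud]))  # prints "client"
print(assign(Workload("batch render", 50.0, 200.0), [client, cloud]))  # prints "cloud"
```

The point of the sketch is the decision itself: once client, cloud, and network capabilities are visible in one place, placement becomes a policy choice rather than a fixed role.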
OPPORTUNITIES
- More user scalability and ROI from infrastructures
- More efficiency across client, cloud, and network
- More capabilities for more users
- Improved environmental sustainability
- Supports demand for new products and services across client, cloud, and network
- Supports growth of resource-intensive use cases
WHAT YOU GET
- Solve problems by collaborating with leading expertise across the industry
- Grow the ROI of products and services by enabling their Triparity future
- Prepare for the upcoming needs of businesses and customers
- Be seen as an industry leader that is enabling what’s next
THE TRIPARITY INITIATIVE COULD…
- Drive new sales potential across client, cloud, and network
- Liberate computing resource bottlenecks to support business and customer needs
- Help ensure importance and product value propositions in the future of computing
THERE IS URGENCY BECAUSE…
- Customers may need more compute resources
- Operational costs could increase if supply diminishes
- High value and high growth use cases need enough supply to be viable
- Scaled resources should have sustainability in mind
TRIPARITY’S ECOSYSTEM
The Triparity Initiative is developed by the CORA (Create Once Reach All) ecosystem. CORA is made up of supportive technologies, methods, and frameworks that deliver the same digital content across multiple platforms and devices. The Triparity Initiative supports CORA thinking by striving to deliver content in its ideal form, leveraging client device, cloud service, content development, and network ecosystem capabilities.
YOU ARE CORA TOO!
• PC • Mobile • Console •
• Streaming • Cloud Service Providers (CSP) •
• Edge • Broadband • Telecom • 5G •
• CPU • GPU • Infrastructure • ISV • IHV •
• Platforms • OS • Metaverse • Security •
• Virtual Desktop Infrastructure (VDI) •
• Game Engines and Development •
• Collaboration Tools • SAAS •
• Cloud Gaming •
HOW TRIPARITY COULD SUPPORT YOUR NEEDS
- In 2016, TIFCA accurately predicted that virtual reality HMD sales would be correlated with qualified GPU sales
- High demand use cases like artificial intelligence, VDI / DAAS, and cloud gaming could be pushing our cloud resources to their limits
- Some end-user experiences and use cases operate on rationed resources
Are there use cases that could grow with more compute resource availability?
IS TRIPARITY IMPORTANT TO YOU?
While current computing models support the needs of many use cases, some are pushing the limits of our ecosystem and its ability to scale fast enough to meet demand.
Use cases that meet all of the following criteria are strong motivators for a hybrid client-cloud framework like The Triparity Initiative.
IF YOU ANSWER YES TO ALL, IT IS
- Applications are heavily cloud computing dependent
- Use case growth and market size depend on sufficient cloud resources
- Scaling cloud resources could be challenging because of technological, manufacturing, business, or environmental factors
- There is evidence that compute resource capacity has been reached, or that user experiences and capabilities are being rationed
WHAT IT IS
Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Popular examples of artificial intelligence include expert systems, natural language processing, and “on request” digital artistry.
COMPUTING NEEDS
- Examples of AI use cases that require high-end cloud GPUs include language models, artistry, research and development, fluid dynamics, and more.
- Training models need large amounts of storage and scaled access to that storage
Did you know that a single AI language model installation could require 10,000 or more high-end cloud GPUs?
That’s about 5% of all the cloud GPUs sold each year, and only some of them are qualified.
CHALLENGES
Enough data center GPUs are needed to keep AI use cases accessible. Availability and scalability are affected by:
- Hardware complexity
- Stringent reliability standards
- Potential electricity and cooling demands
- Refresh cycles ranging from 18 months to three years
OBSERVATIONS / EVIDENCE
- Limited access to artificial intelligence resources
- Growing costs of artificial intelligence services
- Stoppage or reduction of AI training model sizes
- Artificial intelligence is a mass market industry sector
- AI has a disproportionate need for high-end cloud GPUs
- AI could contribute to data center chip shortages and resource limitations
Could the needs of AI affect the operational costs of other industry sectors?
Could client device CPUs and GPUs contribute to the resource pool of artificial intelligence?
WHY THIS MATTERS
- AI companies are facing high customer demand
- AI should have a wide audience and use case potential
- AI progress should be accelerated by available computing resources
- AI services could affect resource availability and operational costs for other use cases
WHAT IF TRIPARITY WAS ACHIEVED?
- Client devices could contribute to the AI resource pool
- Possible availability of more AI resources for more users
- Could add viability to more AI companies and services
- Could encourage new product sales across client, cloud, and network
WHAT IT IS
Virtual Desktop Infrastructure (VDI) is a cloud computing setup that streams virtual or remote desktops to users over the Internet or a LAN. End users access these desktops through client devices like PCs and mobile devices. The client device is usually treated as a “dumb terminal” with limited processing requirements. Desktop as a Service (DAAS) is similar, except the cloud portion is handled as a subscription service, with all network traffic delivered through the Internet.
COMPUTING NEEDS
- Streams remote desktop instances to individual client devices
- Depending on processing requirements, could use a full GPU in the cloud for each instance
- Remote storage
- Local client device peripheral compatibility (e.g. mouse, keyboard, storage devices, etc.)
- While there could be a performance difference, a CPU can do everything a GPU can.
- A VDI / DAAS instance can place about 30% of its responsibility on a cloud CPU, and 70% on a cloud GPU
- There are approximately 200,000 cloud GPUs and 120-130 million cloud CPUs sold each year
- It is not yet possible to dynamically transition resource needs between CPUs and GPUs in a data center
Could more users be served by improving the way CPUs and GPUs are leveraged?
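As a back-of-the-envelope illustration of the question above, here is a toy calculation of how many VDI / DAAS instances could be served if abundant cloud CPUs were allowed to absorb work from scarce cloud GPUs. The annual sales figures come from this document; the per-unit instance capacities are invented assumptions, not measurements:

```python
# Toy capacity arithmetic, not real benchmarks: the per-unit instance
# capacities below are invented to illustrate the document's point that
# abundant CPUs could absorb work from scarce GPUs.
CLOUD_GPUS = 200_000        # ~annual cloud GPU sales (from this document)
CLOUD_CPUS = 120_000_000    # low end of annual cloud CPU sales (from this document)

GPU_INSTANCES_PER_GPU = 1    # assumption: one VDI instance per cloud GPU
CPU_INSTANCES_PER_CPU = 0.5  # assumption: a CPU runs the same instance at half capacity

# GPU-bound model: every instance needs a GPU, so GPUs are the bottleneck.
gpu_bound = CLOUD_GPUS * GPU_INSTANCES_PER_GPU

# Flexible model: instances that fit on a CPU no longer wait for a GPU.
flexible = gpu_bound + int(CLOUD_CPUS * CPU_INSTANCES_PER_CPU)

print(f"GPU-bound instances: {gpu_bound:,}")   # 200,000
print(f"With CPU fallback:   {flexible:,}")    # 60,200,000
```

Even with deliberately conservative made-up numbers, the gap between the two models shows why removing the hard CPU/GPU split could matter.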
CHALLENGES
- Collaboration between VDI / DAAS users could be too processor intensive to do well
- Remote work / hybrid work is here to stay. Companies are supporting 2+ remote work days per week, which requires VDI / DAAS solutions. Computing needs are expected to grow.
- VDI / DAAS needs a computing model that supports legacy applications
- VDI / DAAS could be relying on scarce GPU resources when available CPUs could carry the full load
OBSERVATIONS / EVIDENCE
- An unexpected need for client device hardware upgrades to meet employee needs during the pandemic
- DAAS user experiences shared within TIFCA
Did you know that full transitions to VDI and DAAS solutions in the workplace could extend PC device refresh cycles from five years to eight years?
That’s three years of revenue that could go to innovations and product sales for IHVs, OEMs, PC makers, and more.
WHY THIS MATTERS
- VDI / DAAS needs to remain affordable as demand for cloud resources grows
- VDI / DAAS should be supportive of product sales for new PC and mobile client devices
- Computing ecosystems should be prepared for the growing needs of remote work and play applications
- Positive user experiences and capabilities contribute to new product sales
WHAT IF TRIPARITY WAS ACHIEVED?
- Client devices could contribute to processing requirements
- Collaboration could work better for more users
- There could be more scalability as fewer cloud resources would be required at once
- Client devices may be able to add features and capabilities beyond what the cloud service can do on its own