<?xml version="1.0"?>
<!DOCTYPE flagsdescription SYSTEM "http://www.spec.org/dtd/cpuflags2.dtd">
<flagsdescription>
<filename>HPE-Platform-Flags-Intel-V1.2-SKX-revH</filename>
<title>SPEC CPU2006/SPEC CPU2017 Platform Settings for HPE ProLiant Intel-based systems</title>
<os_tuning>
<![CDATA[
<p><b>OS Tuning</b></p>
<p><b>ulimit</b>:</p>
<p>Used to set user limits of system-wide resources. Provides control over resources available to the shell and processes started by it. Some common ulimit commands may include:</p>
<ul>
<li><b>ulimit -s [n | unlimited]</b>: Set the stack size to <b>n</b> kbytes, or <b>unlimited</b> to allow the stack size to grow without limit.</li>
<li><b>ulimit -l [n]</b>: Set the maximum size, in kbytes, of memory that may be locked.</li>
</ul>
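To make such limits persist across logins, they are commonly placed in /etc/security/limits.conf rather than set per shell. A minimal sketch of such a fragment follows; the memlock value shown is an illustrative assumption, not a value taken from this report:

```shell
# /etc/security/limits.conf fragment (illustrative values, not from this report)
# <domain>  <type>  <item>    <value>
*           soft    stack     unlimited
*           hard    stack     unlimited
*           soft    memlock   8388608    # kbytes that may be locked into memory
*           hard    memlock   8388608
```

The `*` domain applies the limits to all users; a specific user or group name can be used instead to scope them to the benchmark account.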
<p><b>Disabling Linux services</b>:</p>
<p>Certain Linux services may be disabled to minimize tasks that may consume CPU cycles.</p>
<p><b>irqbalance</b>:</p>
<p>Disabled through "service irqbalance stop". Depending on the workload involved, the irqbalance service reassigns various IRQs to system CPUs. Though this service might help in some situations, disabling it can also help environments that need to minimize or eliminate latency to more quickly respond to events.</p>
<p><b>Performance Governors (Linux)</b>:</p>
<p>In-kernel CPU frequency governors are pre-configured power schemes for the CPU. The CPUfreq governors use P-states to change frequencies and lower power consumption. The dynamic governors can switch between CPU frequencies, based on CPU utilization to allow for power savings while not sacrificing performance.</p>
<p>Other options beside a generic performance governor can be set, such as the perf-bias:</p>
<p><b>--perf-bias, -b</b></p>
<p>On supported Intel processors, this option sets a register which allows the cpupower utility (or other software/firmware) to set a policy that controls the relative importance of performance versus energy savings to the processor. The range of valid numbers is 0-15, where 0 is maximum performance and 15 is maximum energy efficiency.</p>
<p>The processor uses this information in model-specific ways when it must select trade-offs between performance and energy efficiency. This policy hint does not supersede Processor Performance states (P-states) or CPU Idle power states (C-states), but allows software to have influence where it would otherwise be unable to express a preference.</p>
<p>On many Linux systems one can set the perf-bias for all CPUs through the cpupower utility with one of the following commands:</p>
<ul>
<li>"cpupower -c all set -b 0"</li>
<li>"cpupower -c all set --perf-bias 0"</li>
<li>"cpupower set -b 0"</li>
</ul>
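The valid range can be checked before handing a value to cpupower. The following POSIX-shell helper is hypothetical (it is not part of cpupower itself) and merely illustrates the documented 0-15 range:

```shell
# Hypothetical validation helper (illustrative; not part of cpupower):
# accept only integers in the documented perf-bias range 0-15.
valid_perf_bias() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # reject empty or non-numeric input
  esac
  [ "$1" -ge 0 ] && [ "$1" -le 15 ]
}

valid_perf_bias 0  && echo "0 accepted"
valid_perf_bias 16 || echo "16 rejected"
# cpupower -c all set -b 0   # requires root and a supported Intel processor
```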
<p><b>Tuning Kernel parameters</b>:</p>
<p>The following Linux Kernel parameters were tuned to better optimize performance of some areas of the system:</p>
<ul>
<li><b>dirty_background_ratio</b>: Set through "echo 40 > /proc/sys/vm/dirty_background_ratio". This setting can help Linux disk caching and performance by setting the percentage of system memory that can be filled with dirty pages.</li>
<li><b>dirty_ratio</b>: Set through "echo 40 > /proc/sys/vm/dirty_ratio". This setting is the absolute maximum amount of system memory that can be filled with dirty pages before everything must get committed to disk.</li>
<li><b>swappiness</b>: The swappiness value can range from 1 to 100. A value of 100 will cause the kernel to swap out inactive processes frequently in favor of file system performance, resulting in large disk cache sizes. A value of 1 tells the kernel to only swap processes to disk if absolutely necessary. This can be set through a command like "echo 1 > /proc/sys/vm/swappiness"</li>
<li><b>ksm/sleep_millisecs</b>: Set through "echo 200 > /sys/kernel/mm/ksm/sleep_millisecs". This setting controls how many milliseconds the ksmd (KSM daemon) should sleep before the next scan.</li>
<li><b>khugepaged/scan_sleep_millisecs</b>: Set through "echo 50000 > /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs". This setting controls how many milliseconds khugepaged should wait after a hugepage allocation failure to throttle the next allocation attempt.</li>
<li><b>numa_balancing</b>: Disabled through "echo 0 > /proc/sys/kernel/numa_balancing". This feature will automatically migrate data on demand so memory nodes are aligned to the local CPU that is accessing data. Depending on the workload involved, enabling this can boost the performance if the workload performs well on NUMA hardware. If the workload is statically set to balance between nodes, then this service may not provide a benefit.</li>
<li><b>Zone Reclaim Mode</b>: Zone reclaim allows the reclaiming of pages from a zone if the number of free pages falls below a watermark even if other zones still have enough pages available. Reclaiming a page can be more beneficial than taking the performance penalties that are associated with allocating a page on a remote zone, especially for NUMA machines. To tell the kernel to free local node memory rather than grabbing free memory from remote nodes, use a command like "echo 1 > /proc/sys/vm/zone_reclaim_mode"</li>
<li><b>max_map_count</b>: The maximum number of memory map areas a process may have. Memory map areas are used as a side effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries.</li>
</ul>
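The echo commands above take effect immediately but do not survive a reboot. A sketch of the equivalent persistent configuration follows; the sysctl key names are inferred from the /proc paths above, and the KSM/khugepaged knobs are sysfs files rather than sysctl keys, so they are shown as boot-time commands in a comment:

```shell
# /etc/sysctl.conf fragment mirroring the /proc values described above
vm.dirty_background_ratio = 40
vm.dirty_ratio = 40
vm.swappiness = 1
vm.zone_reclaim_mode = 1
kernel.numa_balancing = 0

# The KSM and khugepaged knobs live in sysfs, not sysctl; re-apply them
# from a boot script instead:
#   echo 200   > /sys/kernel/mm/ksm/sleep_millisecs
#   echo 50000 > /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
```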
<p><b>tuned-adm</b>:</p>
<p>The tuned-adm tool is a command-line interface for switching between the different tuning profiles provided by the tuned tuning daemon in supported Linux distros. The default configuration file is located in /etc/tuned.conf and the supported profiles can be found in /etc/tune-profiles.</p>
<p>Some profiles that may be available by default include: default, desktop-powersave, server-powersave, laptop-ac-powersave, laptop-battery-powersave, spindown-disk, throughput-performance, latency-performance, enterprise-storage</p>
<p>To set a profile, one can issue the command "tuned-adm profile (profile_name)". Here are details about relevant profiles. </p>
<ul>
<li><b>throughput-performance</b>: Server profile for typical throughput tuning. This profile disables tuned and ktune power saving features, enables sysctl settings that may improve disk and network IO throughput performance, switches to the deadline scheduler, and sets the CPU governor to performance.</li>
<li><b>latency-performance</b>: Server profile for typical latency tuning. This profile disables tuned and ktune power saving features, enables the deadline IO scheduler, and sets the CPU governor to performance.</li>
<li><b>enterprise-storage</b>: Server profile for high disk throughput tuning. This profile disables tuned and ktune power saving features, enables the deadline IO scheduler, enables hugepages and disables disk barriers, increases disk readahead values, and sets the CPU governor to performance.</li>
</ul>
<p><b>Transparent Huge Pages (THP)</b>:</p>
<p>THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages. THP is designed to hide much of the complexity in using huge pages from system administrators and developers, as normal huge pages must be assigned at boot time, can be difficult to manage manually, and often require significant changes to code in order to be used effectively. Transparent Hugepages increase the memory page size from 4 kilobytes to 2 megabytes. Transparent Hugepages provide significant performance advantages on systems with highly contended resources and large memory workloads. If memory utilization is too high, or memory is too badly fragmented for hugepages to be allocated, the kernel will assign smaller 4k pages instead. Most recent Linux OS releases have THP enabled by default.</p>
<p><b>Linux Huge Page settings</b>:</p>
<p>If you need finer control and want to set huge pages manually, you can follow the steps below:</p>
<ul>
<li>Create a mount point for the huge pages: "mkdir /mnt/hugepages"</li>
<li>The huge page file system needs to be mounted when the system reboots. Add the following to a system boot configuration file before any services are started: "mount -t hugetlbfs nodev /mnt/hugepages"</li>
<li>Set vm/nr_hugepages=N in your /etc/sysctl.conf file where N is the maximum number of pages the system may allocate.</li>
<li>Reboot to have the changes take effect.</li>
</ul>
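A sketch of the persistent configuration those steps produce is shown below. N remains a placeholder for the page count, and the fstab options are an illustrative assumption for this file layout:

```shell
# /etc/fstab entry so the hugetlbfs mount survives reboots
nodev  /mnt/hugepages  hugetlbfs  defaults  0 0

# /etc/sysctl.conf entry reserving N huge pages at boot
vm.nr_hugepages = N
```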
<p>Note that further information about huge pages may be found in your Linux documentation file: /usr/src/linux/Documentation/vm/hugetlbpage.txt</p>
]]>
</os_tuning>
<firmware>
<![CDATA[
<p><b>Firmware Settings</b></p>
<p>One or more of the following settings may have been set. If so, the "Platform Notes" section of the report will say so; and you can read below to find out more about what these settings mean.</p>
<p><b>Intel Hyper-Threading (Default = Enabled):</b></p>
<p>This feature allows enabling or disabling of logical processor cores on processors supporting Intel Hyper-Threading (HT). When enabled, each physical processor core operates as two logical processor cores. When disabled, each physical core operates as only one logical processor core. Enabling this option can improve overall performance for applications that benefit from a higher processor core count.</p>
<p><b>Thermal Configuration (Default = Optimal Cooling):</b></p>
<p>This feature allows the user to select the fan cooling solution for the system. Values for this BIOS option can be:</p>
<ul>
<li><b>Optimal Cooling</b>: Provides the most efficient solution by configuring fan speeds to the minimum required to provide adequate cooling.</li>
<li><b>Increased Cooling</b>: Will run fans at higher speeds to provide additional cooling. Increased Cooling should be selected when non-HPE storage controllers are cabled to the embedded hard drive cage, or if the system is experiencing thermal issues that cannot be resolved in another manner.</li>
<li><b>Maximum Cooling</b>: Will provide the maximum cooling available by this platform.</li>
</ul>
<p><b>Last Level Cache (LLC) Dead Line Allocation (Default = Enabled):</b></p>
<p>In the Skylake cache scheme, mid-level cache (MLC) evictions are filled into the last level cache (LLC). If a line is evicted from the MLC to the LLC, the Skylake core can flag the evicted MLC lines as "dead", meaning the lines are not likely to be read again. When this option is disabled, dead lines are dropped and never fill the LLC. Values for this BIOS option can be:</p>
<ul>
<li><b>Disabled</b>: Disabling this option can save space in the LLC by never filling dead lines into the LLC. This can prevent useful data from being evicted.</li>
<li><b>Enabled</b>: Opportunistically fill dead lines in LLC, if space is available.</li>
</ul>
<p><b>Stale A to S (Default = Disabled):</b></p>
<p>The in-memory directory has three states: invalid (I), snoopAll (A), and shared (S). Invalid (I) state means the data is clean and does not exist in any other socket's cache. The snoopAll (A) state means the data may exist in another socket in exclusive or modified state. Shared (S) state means the data is clean and may be shared across one or more sockets' caches. When doing a read to memory, if the directory line is in the A state we must snoop all the other sockets because another socket may have the line in modified state. If this is the case, the snoop will return the modified data. However, it may be the case that a line is read in A state and all the snoops come back as misses. This can happen if another socket read the line earlier and then silently dropped it from its cache without modifying it. Values for this BIOS option can be:</p>
<ul>
<li><b>Disabled</b>: Disabling this option allows the feature to process memory directories as described above.</li>
<li><b>Enabled</b>: In the situation where a line in A state returns only snoop misses, the line will transition to S state. That way, subsequent reads to the line will encounter it in S state and not have to snoop, saving latency and snoop bandwidth.</li>
</ul>
<p>Stale A to S may be beneficial in a workload where there are many cross-socket reads.</p>
<p><b>Last Level Cache (LLC) Prefetch (Default = Disabled):</b></p>
<p>This option configures the processor Last Level Cache (LLC) prefetch feature as a result of the non-inclusive cache architecture. The LLC prefetcher exists on top of other prefetchers that can prefetch data into the core data cache unit (DCU) and mid-level cache (MLC). In some cases, setting this option to disabled can improve performance; typically, setting this option to enabled provides better performance. Values for this BIOS option can be:</p>
<ul>
<li><b>Disabled</b>: Forces data to fill the MLC before being prefetched into the LLC.</li>
<li><b>Enabled</b>: Gives the core prefetcher the ability to prefetch data directly into the LLC without filling the MLC.</li>
</ul>
<p><b>NUMA Group Size Optimization (Default = Clustered):</b></p>
<p>This feature allows the user to configure how the BIOS reports the size of a NUMA node (number of logical processors), which assists the Operating System in grouping processors for application use (referred to as Kgroups). Values for this BIOS option can be:</p>
<ul>
<li><b>Clustered</b>: Might provide better performance for some workloads due to optimizing the resulting groups along NUMA boundaries.</li>
<li><b>Flat</b>: Might provide better performance for some workloads that cannot take advantage of processors spanning multiple groups. This setting would be necessary to help this class of applications utilize more logical processors.</li>
</ul>
<p><b>Sub-NUMA Clustering (SNC) (Default = Enabled):</b></p>
<p>SNC breaks up the last level cache (LLC) into disjoint clusters based on address range, with each cluster bound to a subset of the memory controllers in the system. SNC improves average latency to the LLC and memory. SNC is a replacement for the cluster on die (COD) feature found in previous processor families. For a multi-socketed system, all SNC clusters are mapped to unique NUMA domains. (See also IMC interleaving.) Values for this BIOS option can be:</p>
<ul>
<li><b>Disabled</b>: The LLC is treated as one cluster.</li>
<li><b>Enabled</b>: Utilizes LLC capacity more efficiently and reduces latency due to core/IMC proximity. This may provide performance improvement on NUMA-aware operating systems.</li>
</ul>
<p><b>Xtended Prediction Table (XPT) Prefetch (Default = Enabled):</b></p>
<p>This option configures the processor Xtended Prediction Table (XPT) prefetch feature. The XPT prefetcher exists on top of other prefetchers that can prefetch data into the core DCU, MLC, and LLC. The XPT prefetcher issues a speculative DRAM read request in parallel with an LLC lookup. This prefetch bypasses the LLC, saving latency. In some cases, setting this option to disabled can improve performance; typically, setting this option to enabled provides better performance. This option must be enabled when Sub-NUMA Clustering is enabled. Values for this BIOS option can be:</p>
<ul>
<li><b>Enabled</b>: Allows a read request sent to the LLC to speculatively issue a copy of the read to the memory controller requesting the prefetch.</li>
<li><b>Disabled</b>: Does not allow the LLC to speculatively issue copies of reads. Disabling this also disables Sub-NUMA Clustering (SNC).</li>
</ul>
<p><b>Uncore Frequency Scaling (Default = Auto):</b></p>
<p>This option controls the frequency scaling of the processor's internal buses (the uncore). Values for this BIOS option can be:</p>
<ul>
<li><b>Auto</b>: Enables the processor to dynamically change frequencies based on the workload.</li>
<li><b>Maximum</b>: Enables tuning for latency.</li>
<li><b>Minimum</b>: Enables tuning for power consumption.</li>
</ul>
<p><b>Workload Profile (Default = General Power Efficient Compute):</b></p>
<p>This option allows a user to choose one workload profile that best fits the user's needs. The workload profiles control many power and performance settings that are relevant to general workload areas. Values for this BIOS option can be:</p>
<ul>
<li>General Power Efficient Compute, General Peak Frequency Compute, General Throughput Compute, Virtualization - Power Efficient, Virtualization - Max Performance, Low Latency, Mission Critical, Transaction Application Processing, High Performance Compute (HPC), Decision Support, Graphic Processing, I/O Throughput, or Custom.</li>
<li>Setting the Workload Profile to any option not named Custom allows the server to automatically configure various BIOS settings. These BIOS settings control many power and performance settings that are relevant to general workload areas that fit the profile name.</li>
<li>Setting the Workload Profile to Custom allows a user to set any BIOS setting to any supported setting. Choosing Custom after selecting an initial profile does not change the settings controlled by the profile previously selected without user intervention.</li>
<li>Further technical description about what settings a Workload Profile changes and the types of workloads that a profile may be suitable for can be found through the HPE UEFI Workload-based Performance and Tuning Guide - https://support.hpe.com/hpsc/doc/public/display?docId=a00016408en_us</li>
</ul>
<p><b>Minimum Processor Idle Power Core C-State (Default = C6 State):</b></p>
<p>This option can only be configured if the Workload Profile is set to Custom, or this option is not a dependent value for the Workload Profile. This feature selects the processor's lowest idle power state (C-state) that the operating system uses. The higher the C-state, the lower the power usage of that idle state (C6 is the lowest power idle state supported by the processor). Values for this setting can be:</p>
<ul>
<li><b>C6 State</b>: While in C6, the core PLLs are turned off, the core caches are flushed and the core state is saved to the Last Level Cache. Power Gates are used to reduce power consumption to close to zero. C6 is considered an inactive core.</li>
<li><b>C1E State</b>: C1E is defined as the enhanced halt state. While in C1E, no instructions are being executed. C1E is considered an active core.</li>
<li><b>No C-states</b>: No C-states is defined as C0, which is defined as the active state. While in C0, instructions are being executed by the core.</li>
</ul>
<p><b>Minimum Processor Idle Power Package C-State (Default = Package C6 (retention) State):</b></p>
<p>This option can only be configured if the Workload Profile is set to Custom, or this option is not a dependent value for the Workload Profile. This feature selects the processor's lowest idle package power state (C-state) that is enabled. The processor will automatically transition into the package C-states based on the Core C-states, in which cores on the processor have transitioned. The higher the package C-state, the lower the power usage of that idle package state. Package C6 (retention) is the lowest power idle package state supported by the processor. Values for this setting can be:</p>
<ul>
<li><b>Package C6 (retention) State</b>: All cores have saved their architectural state and have had their core voltages reduced to zero volts. The LLC retains context, but no accesses can be made to the LLC in this state; the cores must break out to the internal package C2 state for snoops to occur.</li>
<li><b>Package C6 (non-retention) State</b>: All cores have saved their architectural state and have had their core voltages reduced to zero volts. The LLC does not retain context, and no accesses can be made to the LLC in this state; the cores must break out to the internal package C2 state for snoops to occur.</li>
<li><b>No Package State</b>: All cores are in an active state and have not entered any power saving state.</li>
</ul>
<p><b>Energy/Performance Bias (Default = Balanced Performance):</b></p>
<p>This option can only be configured if the Workload Profile is set to Custom, or this option is not a dependent value for the Workload Profile. This option configures several processor subsystems to optimize the processor's performance and power usage. Values for this BIOS setting can be:</p>
<ul>
<li><b>Balanced Performance</b>: Provides optimum performance efficiency and is recommended for most environments.</li>
<li><b>Maximum Performance</b>: Should be used for environments that require the highest performance and lowest latency but are not sensitive to power consumption.</li>
<li><b>Balanced Power</b>: Similar to Balanced Performance but this option prioritizes more power savings at the sacrifice of performance.</li>
<li><b>Power Savings Mode</b>: Should only be used in environments that are power sensitive and are willing to accept reduced performance.</li>
</ul>
<p><b>AHS PCI Logging Level (Default = Verbose Logging):</b></p>
<p>This option allows the AHS PCI Logging size to be changed. This is a boot time option that should have no effect on run time performance. Values for this BIOS setting can be:</p>
<ul>
<li><b>Verbose Logging</b>: Allows 960 bytes to be logged.</li>
<li><b>Minimal Logging</b>: Allows 64 bytes to be logged.</li>
</ul>
<p><b>Memory Patrol Scrubbing (Default = Enabled):</b></p>
<p>This option allows for correction of soft memory errors. Over the length of system runtime, the risk of producing multi-bit and uncorrected errors is reduced with this option. Values for this BIOS setting can be:</p>
<ul>
<li><b>Enabled</b>: Correction of soft memory errors can occur during runtime.</li>
<li><b>Disabled</b>: Soft memory error correction is turned off during runtime.</li>
</ul>
<p><b>Last updated January 9, 2018.</b></p>
]]>
</firmware>
</flagsdescription>