Why does this program produce the following output?
#include <bitset>
...
{
std::bitset<8> b1(01100100); std::cout << b1 << std::endl;
std::bitset<8> b2(11111111); std::cout << b2 << std::endl; // see, this variable has been
                                                           // assigned the value 11111111,
                                                           // whereas, during execution,
                                                           // it takes the value 11000111
std::cout << "b1 & b2: " << (b1 & b2) << '\n';
std::cout << "b1 | b2: " << (b1 | b2) << '\n';
std::cout << "b1 ^ b2: " << (b1 ^ b2) << '\n';
}
This is the output:
01000000
11000111
b1 & b2: 01000000
b1 | b2: 11000111
b1 ^ b2: 10000111
At first I thought something was wrong with the header file (I'm using MinGW), so I
checked with MSVC, but it shows the same thing. Please help.
Despite the appearance, the 11111111 is decimal. The binary representation of 11111111₁₀ is 101010011000101011000111₂. Upon construction, std::bitset<8> takes the eight least significant bits of that: 11000111₂.
The first case is similar, except that 01100100 is octal (due to the leading zero). The same number expressed in binary is 1001000000001000000₂.
One way to represent a bitset with a value of 11111111₂ is std::bitset<8> b1(0xff).
Alternatively, you can construct the bitset from a binary string:
std::bitset<8> b1(std::string("01100100"));
std::bitset<8> b2(std::string("11111111"));