
What does (x ^ 0x1) != 0 mean?

lottogame 2020. 5. 17. 10:27



I came across the following code snippet:

if( 0 != ( x ^ 0x1 ) )
     encode( x, m );

What does x ^ 0x1 mean? Is this a standard technique?


The XOR operation (x ^ 0x1) flips bit 0 of x. So the expression is true if bit 0 of x is 0, or if any other bit of x is 1.

Conversely, the expression is false only when x == 1.

So the test is equivalent to:

if (x != 1)

It is therefore (arguably) needlessly obfuscated.


  • ^ is the bitwise XOR operator
  • 0x1 is 1 written in hexadecimal notation
  • x ^ 0x1 flips the last bit of x (consult an XOR truth table if you are not sure).

So the condition (0 != ( x ^ 0x1 )) is true if x is greater than 1, or if the last bit of x is 0. That leaves x == 1 as the only value for which the condition is false. Therefore it is equivalent to

if (x != 1)

P.S. There is a special place in hell for people who implement such a simple condition this way. Don't. If you must write complex code, leave a comment. I beg you.


This may look like an over-simplified explanation, but for anyone who wants to walk through it slowly, here it is:

^ is the bitwise XOR operator in C, C++, and C#.

A bitwise XOR takes two bit patterns of equal length and performs a logical exclusive OR operation on each pair of corresponding bits.

Exclusive OR is a logical operation that outputs true whenever the two inputs differ (one is true, the other is false).

The truth table of a XOR b:

a           b        a xor b
----------------------------
1           1           0
1           0           1
0           1           1
0           0           0

So let me show 0 == ( x ^ 0x1 ) at the binary level:

             what? xxxxxxxx (8 bits)
               xor 00000001 (hex 0x1 or 0x01, decimal 1)    
             gives 00000000
---------------------------
the only answer is 00000001

So:

   0 == ( x ^ 0x1 )    =>    x == 1
   0 != ( x ^ 0x1 )    =>    x != 1

It is the exclusive OR (XOR) operator. To understand how it works, you can run this simple code:

    std::cout << "0x0 ^ 0x0 = " << ( 0x0 ^ 0x0 ) << std::endl;
    std::cout << "0x0 ^ 0x1 = " << ( 0x0 ^ 0x1 ) << std::endl;
    std::cout << "0x1 ^ 0x0 = " << ( 0x1 ^ 0x0 ) << std::endl;
    std::cout << "0x1 ^ 0x1 = " << ( 0x1 ^ 0x1 ) << std::endl;

The output is:

0x0 ^ 0x0 = 0
0x0 ^ 0x1 = 1
0x1 ^ 0x0 = 1
0x1 ^ 0x1 = 0

This expression,

0 != ( x ^ 0x1 )

is true only when x != 0x1.

It does not change x itself; it only checks whether x is 1 or not. This expression could be replaced by

if ( x != 0x1 )

It checks that x is actually not 0x1... XOR'ing x with 0x1 yields 0 only when x is 0x1. This is an old trick, mostly used in assembly language.


The ^ operator is the bitwise XOR, and 0x1 is the number 1 written as a hexadecimal constant.

So, x ^ 0x1 evaluates to a new value that is the same as x, but with the least significant bit flipped.

The code does nothing more than compare x with 1, in a very convoluted and obscure fashion.


The xor (exclusive or) operator is most commonly used to invert one or more bits. The operation asks whether exactly one of the bits is one, which leads to the following truth table (A and B are inputs, Y is output):

A    B    Y
0    0    0
0    1    1
1    0    1
1    1    0

Now the purpose of this code seems to be to check whether exactly the last bit is 1 and the others are 0; the condition is false only in that case, so this equals if ( x != 1 ). The reason for this obscure method might be that bit manipulation techniques were used earlier and perhaps are used in other places in the program.


^ is the bitwise XOR operator in C. In your case, x is XOR'ed with 1. For example, if x has the value 10, then 10d ^ 1d → 1010b ^ 0001b = 1011b, and 1011b == 11d, so the condition becomes true.


The bitwise test seems to be a deliberate obfuscation, but if the underlying data is corporate data from an IBM mainframe system it may simply be that the code was written to reflect the original documentation. IBM data formats go back to the 1960s and frequently encode flags as single bits within a word to save storage. As the formats were modified, flag bytes were added at the end of the existing records to maintain backwards compatibility. The documentation for an SMF record, for example, might show the assembly language code to test three individual bits within three different words in a single record to decide that the data was an input file. I know much less about TCP/IP internals, but you may find bit flags there, as well.


The operator ^ is the bitwise-xor (see &, | ). The result for a bit pair is,

0 ^ 0 == 0
0 ^ 1 == 1
1 ^ 0 == 1
1 ^ 1 == 0

So the expression,

( x ^ 0x1 )

inverts/flips the 0th bit of x (leaving other bits unchanged).

Consider: can x have values besides 0x0 and 0x1? When x is a single-bit field, it can have only the values 0x0 and 0x1, but when x is an int (char/short/long/etc.), bits besides bit0 can affect the result of the expression.

The expression as given allows bits beside bit0 to affect the result,

if ( 0 != ( x ^ 0x1 ) )

which has the same truthiness as this (simpler) expression,

if ( x ^ 0x1 )

Note that this expression would examine only bit0,

if( 0x1 & ( x ^ 0x1 ) )

So the expression as presented is really combining two expression checks,

if( ( x & ~0x1 )            //look at all bits besides bit0
||  ( 0x1 & ( x ^ 0x1 ) ) ) //combine with the xor expression for bit0

Did the author intend to only check bit0, and have meant to use this expression,

if( 0x1 & ( x ^ 0x1 ) )

Or did the author intend to comingle the values for bit1-bitN and the xor of bit0?


I'm adding a new answer because no one really explained how to get the answer intuitively.

The inverse of + is -.
The inverse of ^ is ^.

How do you solve 0 != x - 1 for x? You + 1 on both sides: 0 + 1 != x - 1 + 1, giving 1 != x.
How do you solve 0 != x ^ 1 for x? You ^ 1 on both sides: 0 ^ 1 != x ^ 1 ^ 1, giving 1 != x.


I'd guess that there are other bits or bit-field values in x, and this is intended to test that only the low-order bit is set. In the context, I'd guess that this is the default, and that therefore encoding of this and some related m (probably more expensive to encode) can be skipped, because they must both be the default value, initialized in a constructor or similar.

Somehow the decoder must be able to infer that these values are missing. If they are at the end of some structure, it may be communicated via a length value that's always present.


XOR is useful with C# [Flags] enums. To remove a single flag from an enum value, the XOR operator can be used (reference here).

Example:

[Flags]
enum FlagTest { None = 0x0, Test1 = 0x1, Test2 = 0x2, Test3 = 0x4 }

FlagTest test = FlagTest.Test2 | FlagTest.Test3;
Console.WriteLine(test); //Out: Test2, Test3
test = test ^ FlagTest.Test2;
Console.WriteLine(test); //Out: Test3

There are a lot of good answers but I like to think of it in a simpler way.

if ( 0 != ( x ^ 0x1 ) )

First of all. An if statement is only false if the argument is zero. This means that comparing not equal to zero is pointless.

if ( a != 0 )
// Same as
if ( a )

So that leaves us with:

if ( x ^ 0x1 )

An XOR with one. What an XOR does is essentially detect bits that are different. So, if all the bits are the same it will return 0. Since 0 is false, the only time it will return false is if all of the bits are the same. So it will be false if the arguments are the same, true if they are different...just like the not equal to operator.

if ( x != 0x1 )

In fact, the only difference between the two is that != will return 0 or 1, while ^ can return any number, but the truthiness of the result will always be the same. An easy way to think about it is:

(b != c) === !!(b ^ c) // for all b and c

The final "simplification" is converting 0x1 to decimal which is 1. Therefore your statement is equivalent to:

if ( x != 1 )

^ is a bitwise XOR operator

If x = 1

          00000001   (x)       (decimal 1)
      XOR 00000001   (0x1)     (decimal 1)
        = 00000000   (0x0)     (decimal 0)

here 0 == ( x ^ 0x1 )

If x = 0

          00000000   (x)       (decimal 0)
      XOR 00000001   (0x1)     (decimal 1)
        = 00000001   (0x1)     (decimal 1)

here 0 != ( x ^ 0x1 )

The truth table of a xor b:

a           b        a xor b
----------------------------
1           1           0
1           0           1
0           1           1
0           0           0

The code simply means if ( x != 1 ).


The standard technique that might be being used, here, is to repeat an idiom as it appears in surrounding context for clarity, rather than to obfuscate it by replacing it with an idiom that is arithmetically simpler but contextually meaningless.

The surrounding code may make frequent reference to (x ^ 1), or the test may be asking "if bit 0 was the other way around, would this bit-mask be empty?".

Given that the condition causes something to be encode()ed, it may be that in context the default state of bit 0 has been inverted by other factors, and we need only encode extra information if any of the bits deviate from their default (normally all-zero).

If you take the expression out of context and ask what it does, you overlook the underlying intention. You might just as well look at the assembly output from the compiler and see that it simply does a direct equality comparison with 1.


As I see it, the answers so far miss a simple rule for handling XORs. Without going into details of what ^ and 0x mean (and if, !=, etc.), the expression 0 != (x^1) can be reworked as follows, using the fact that (a^a)==0:

0 != (x^1) <=> [XOR both sides with 1]
(0^1) != (x^1^1) <=>
1 != x

Source: https://stackoverflow.com/questions/20679642/what-does-x-0x1-0-mean
