[HN Gopher] The new Clang _ExtInt feature provides exact bitwidt...
       ___________________________________________________________________
        
       The new Clang _ExtInt feature provides exact bitwidth integer types
        
       Author : daurnimator
       Score  : 152 points
       Date   : 2020-04-22 14:37 UTC (8 hours ago)
        
 (HTM) web link (blog.llvm.org)
 (TXT) w3m dump (blog.llvm.org)
        
       | nabla9 wrote:
        | I think the most commonly used languages, with and without
        | standards - C, C++, JavaScript/Wasm, Python, Java, etc. -
        | should standardize new primitive type representations
        | together (with hardware people included).
       | 
       | If you have different representations in different languages it
       | just creates unnecessary impedance mismatch. It would be better
       | for everyone if you could just pass these types from language to
       | language.
        
         | einpoklum wrote:
         | C++ can have arbitrary-width integers as library types; it
         | would not be that big of a deal IMHO. If `optional`, `variant`
         | and `any` (and maybe soon, `bit`) are not in the language
         | itself, no reason why n-bit-integer should be.
         | 
         | (Of course, this is written from the "we can jerry-rig the
         | existing language to do what you want" perspective with which
         | so much is achievable efficiently in C++.)
        
           | nabla9 wrote:
           | > no reason why n-bit-integer should be.
           | 
           | If standard is agreed, it could be pragma similar to calling
           | conventions.
        
           | hermitdev wrote:
           | Boost Multiprecision [0] is an example of such a library
            | type. It offers compile-time arbitrarily wide integers
            | (with predefined types up to 1024 bits) and a C++ wrapper
            | around the GMP or MPIR libraries, which support arbitrary
            | sizes at runtime (not sure how it's implemented, but
            | probably on top of an array of ints or BCD (binary-coded
            | decimal)).
           | 
            | C++ has had `optional`, `variant`, and `any` since C++17.
            | All of these types originated (for C++ standardization)
            | in Boost, as well.
           | I'd caution against using `any`, though. From personal
           | experience, the runtime overhead is quite high, and holding
           | any non-none type is a dynamic allocation. Performance is far
           | better with `variant` at the development cost of needing to
           | know all the types you're going to support at compile-time.
           | 
           | [0] https://www.boost.org/doc/libs/1_72_0/libs/multiprecision
           | /do...
        
       | ralusek wrote:
       | A lot of people don't know this, but `BigInt`s are supported in
       | modern JavaScript; integers of arbitrarily large precision.
       | 
       | Try in your browser console:                   2n ** 4096n
       | // output (might have to scroll right)         104438888141315250
       | 66917527107166243825799642490473837803842334832839539079715574568
       | 48826811934997558340890106714439262837987573438185793607263236087
       | 85136527794595697654370999834036159013438371831442807001185594622
       | 63763188393977127456723346843445866174968079087058037040712840487
       | 40118609114467977783598029006686938976881787785946905630190260940
       | 59957945343282346930302669644305902501597239986771421554169383555
       | 98852914863182379144344967340878118726394964751001890413490084170
       | 61675093668333850551032972088269550769983616369411933015213796825
       | 83718809183365675122131849284636812555022599830041234478486259567
       | 44921946170238065059132456108257318353800876086221028342701976982
       | 02313169017678006675195485079921636419370285375124784014907159135
       | 45998279051339961155179427110683113409058427288427979155484978295
       | 43235345170652232690613949059876930021229633956877828789484406160
       | 07412945674919823050571642377154816321380631045902916136926708342
       | 85644073044789997190178146576347322385026725305989979599609079946
       | 92017746248177184498674556592501783290704731194331655508075682218
       | 46571746373296884912819520317457002440926616910874148385078411929
       | 80452298185733897764810312608590300130241346718972667321649151113
       | 1602920781738033436090243804708340403154190336n
       | 
        | To use, just add `n` after the number as literal notation, or
        | cast any Number x with BigInt(x). BigInts may only do
        | operations with other BigInts, so make sure to cast any
        | Numbers where applicable.
       | 
        | I know this is about C, but I thought I'd mention it, since
        | many people seem to be unaware of this.
        
         | justicz wrote:
         | Hm, does this work in Safari? https://caniuse.com/#feat=bigint
        
           | recursive wrote:
           | Safari is the new IE.
        
           | ralusek wrote:
           | Not yet, but I believe babel and others just
           | transpile/polyfill it by having it fall back on a string
           | arithmetic library for working with integers of arbitrary
           | precision.
        
             | recursive wrote:
             | That will never be totally reliable as 1) javascript is
             | dynamically typed 2) javascript doesn't support operator
             | overloading. Nonetheless, there are attempts.
             | 
             | https://www.npmjs.com/package/babel-plugin-transform-bigint
             | 
             | > Update: Now it can convert a code using BigInt into a
             | code using JSBI (https://github.com/GoogleChromeLabs/jsbi).
             | It will try to detect when an operator is used for bigints,
             | not numbers. _This will not work in many cases, so please
             | use JSBI directly only if you know, that the code works
             | only with bigints._
             | 
             | (emphasis mine)
        
           | jakear wrote:
           | "Syntax Error: No identifiers allowed directly after numeric
           | literal"
        
           | The_rationalist wrote:
            | "Does X work in Safari" can almost always be answered
            | with 'no' if the feature is from a _recent_ (less than a
            | decade old) spec.
        
         | waltpad wrote:
          | So clearly, if LLVM is used as a backend for JS, this
          | feature will come in handy.
          | 
          | On a side note, apparently it will also be useful for the
          | Rust folks, who have user-implemented libraries to emulate
          | C-like bitfields and to implement bigints.
         | 
         | So this work has promising outcomes.
        
       | Someone wrote:
       | _"Likewise, if a Binary expression involves operands which are
       | both _ExtInt, rather than promoting both operands to int the
       | narrower operand will be promoted to match the size of the wider
       | operand, and the result of the binary operation is the wider
       | type."_
       | 
        | I don't understand that choice. The result should be of the
        | wider type, yes, but, for example, multiplying an _ExtInt(1)
        | by an _ExtInt(1000) should take less hardware than
        | multiplying two _ExtInt(1000)s. So, why promote the narrower
        | one to the wider type?
        
       | rightbyte wrote:
        | I feel these had better stay compiler extensions. Writing
        | FPGA code involves so much specialness anyway.
        
       | segfaultbuserr wrote:
        | I think it's funny. C was originally invented in an era when
        | machines didn't have a standard integer size - 36-bit
        | architectures were in their heyday - so the C integer types
        | (char, short, int, and long) only have a guaranteed minimum
        | size that can be taken for granted, but nothing else, to
        | achieve portability. But after the computers of the world
        | converged on multiple-of-8-bit integers, the inability to
        | specify a particular size of integer became an issue. As a
        | result, in modern C programming the standard practice is to
        | use uint8_t, uint16_t, uint32_t, etc., defined in <stdint.h>;
        | C's inherent support for different integer sizes is basically
        | abandoned - no one needs it anymore, and it only creates
        | confusion in practice, especially in the bitmasking and
        | bitshifting world of low-level programming. Now, if N-bit
        | integers are introduced to C, it's kind of a negation-of-the-
        | negation, and we complete a full cycle - the ability to work
        | on non-multiple-of-8-bit integers will come back (although
        | the original integer-size independence and portability will
        | not).
        
         | AaronFriel wrote:
         | There's something fundamentally different and mistaken about
         | C's original implementation of variable integer sizes though.
         | 
         | People often describe C as "portable assembly", but despite
         | this, integer sizes varying on different platforms results in
         | non-portability of anything those programs _produce_. That is,
         | a "file", or bit stream (not byte stream!) produced by one
         | machine may be incompatible with another. The original integer-
         | size independence is decidedly _not portable_.
         | 
         | That was probably less of a problem when it was rare to send
         | data from one physical machine to another machine, let alone
         | one of another type. But now the world is inter-net-worked and
         | we have all sorts of machines talking to each other all the
         | time.
         | 
         | Making the interfaces explicit reduces errors. These days we
         | now even have virtual machines and programs running at
         | different bit widths on the same machine, and emulated machines
         | on the same machine running different ISAs!
         | 
          | I'm also part of what I'm sure is a small number of users
          | who believe that using "usize" in Rust should be a lint
          | error requiring a manual override, and who think endianness
          | should be explicit as well. Heck, it should be a compiler
          | error to write a struct to a socket if it contains any
          | non-explicit values!
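For instance, explicit byte-by-byte serialization in C, instead of writing a struct's in-memory representation to the wire (a common idiom; the helper name is made up for illustration):

```c
#include <stdint.h>

/* Serialize a 32-bit value in an explicit little-endian byte order,
   independent of the host machine's endianness and of any struct
   layout or padding. */
static void put_le32(unsigned char out[4], uint32_t v) {
    out[0] = (unsigned char)(v & 0xFF);
    out[1] = (unsigned char)((v >> 8) & 0xFF);
    out[2] = (unsigned char)((v >> 16) & 0xFF);
    out[3] = (unsigned char)((v >> 24) & 0xFF);
}
```

The same bytes come out on a big-endian or little-endian host, which is exactly the "explicit interface" being argued for.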
        
         | GTP wrote:
          | Why won't portability come back? Also, when was the ability
          | to work on non-multiple-of-8-bit integers lost?
        
           | gumby wrote:
           | AFAIK the last machine in widespread production that handled
           | multi-length integers was the PDP-10/20 and its clones which
           | essentially died around 1984. I say "around" because though
           | DEC canceled the 20 line, some clones remained (that was
           | Cisco's original business plan, for example)
        
             | ajuc wrote:
              | There was a series of Polish mainframes called Odra
              | with 24-bit integers, which also died out in the '80s,
              | essentially, but some of them were still used until
              | 2010 in some specialized railway station software, and
              | there was a short series of faster replacement
              | processors for them made in the late '80s/early '90s,
              | called SKOK.
             | 
             | Of course they weren't "widespread" for most meanings of
             | that word :)
        
           | segfaultbuserr wrote:
            | Well, it's not lost in C itself. But in the practice of
            | modern C programming, it's often sacrificed in favor of
            | using integers of exact sizes (uintN_t), and many
            | programs perform bitwise operations assuming an exact
            | integer size. Per C99, the uintN_t types are guaranteed
            | to have exactly N bits across all implementations, but
            | they are included only if the implementation supports
            | them. So programs using them are standard C, but not 100%
            | portable; there is no requirement in C to implement
            | exact-width integers.
           | 
            | Although modifying most programs shouldn't be difficult
            | (there is uint_leastN_t); also, C compilers could be
            | modified to treat extra bits as if they don't exist, to
            | allow existing programs to work again.
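A sketch of the uint_leastN_t approach, masking so that any extra bits behave as if they don't exist (the helper name is hypothetical):

```c
#include <stdint.h>

/* uint_least16_t is required to exist on every implementation, unlike
   uint16_t, which is optional. Masking makes the addition wrap at 16
   bits even if the type happens to be wider on an exotic machine. */
static uint_least16_t add16(uint_least16_t a, uint_least16_t b) {
    return (uint_least16_t)((a + b) & 0xFFFFu);
}
```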
        
             | pjc50 wrote:
             | This is the other way round though; code using "int" is a
             | disaster for portability, because you don't know what
             | you're actually getting. So people use uintN_t to get
             | something specific which behaves the same way on different
             | platforms - i.e. portable.
             | 
             | You can always #define or typedef uintN_t to a machine
             | type. You can't re-typedef int. You _can_ #define it, but
             | people will hate you.
        
         | m463 wrote:
          | Why not just go to value ranges?
          | 
          | Some languages like Ada allow a type that, say, goes from
          | -273 to 600.
        
           | Gibbon1 wrote:
           | I would very much prefer this and the ability to spec what
           | happens on overflow.
           | 
           | There are a lot of arguments back and forth because there
           | actually is no 'right way' to handle overflow.
        
           | guerby wrote:
            | And GNAT (the GCC Ada front-end) will use a biased
            | representation for range types when packing tight:
            | 
            |     with Ada.Text_IO; use Ada.Text_IO;
            |     procedure T is
            |        type T1 is range 16 .. 19;
            |        type T2 is range -7 .. 0;
            |        type R is record
            |           A : T1;
            |           B, C : T2;
            |        end record;
            |        for R use record
            |           A at 0 range 0 .. 1;
            |           B at 0 range 2 .. 4;
            |           C at 0 range 5 .. 7;
            |        end record;
            |        X : R := (17, -2, -3);
            |     begin
            |        Put_Line(X'Size'Image); -- 8 bits
            |     end T;
        
           | weinzierl wrote:
            | Pascal has it too. I always found it quite natural to
            | specify the range I want in the data type, and not as a
            | precondition in the function. Another big advantage is
            | that you can avoid a good deal of potential off-by-one
            | errors if you define your data types appropriately. For
            | example, the following definition in Pascal would be much
            | less error-prone than the corresponding definition in
            | C[1]:
            | 
            |     var
            |       weekday:  0 .. 6;
            |       monthday: 1 .. 31;
            | 
            | For a quarter of a century I have wondered why no one
            | seems to miss that feature. I really hope we will get it
            | in C one day. Even more so, I hope that the proposals for
            | refinement types in Rust[2] will one day be resolved and
            | implemented.
           | 
           | [1] https://www.cplusplus.com/reference/ctime/tm/
           | 
           | [2] https://github.com/rust-lang/rfcs/issues/671
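In C the usual workaround is a run-time check rather than a compile-time subrange; a minimal sketch (the type and constructor names are made up):

```c
#include <assert.h>

/* C has no subrange types, so the range lives in a checked
   constructor instead of the type itself; violations surface only at
   run time, unlike Pascal's compile-time-constrained subranges. */
typedef int weekday;   /* intended range: 0..6  */
typedef int monthday;  /* intended range: 1..31 */

static weekday make_weekday(int v) {
    assert(v >= 0 && v <= 6);
    return v;
}

static monthday make_monthday(int v) {
    assert(v >= 1 && v <= 31);
    return v;
}
```

Nothing stops code from bypassing the constructor, which is exactly the weakness range types fix.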
        
         | BenoitEssiambre wrote:
          | Yes, this is kind of funny. My understanding is that
          | Thompson and Ritchie deliberately left out non-power-of-two
          | word size support, which other languages had at the time
          | because CPU manufacturers were adding a couple of bits at a
          | time to new models ("Get this year's 14-bit CPU, two more
          | than last year's!").
          | 
          | To make a simpler, more elegant, more portable language,
          | they decided to settle on power-of-two word lengths. This
          | is similar to how Unix came about, leaving out the cruft
          | and complexity of the over-engineered Multics.
        
           | anticensor wrote:
            | The requirement is not to be a power of two, it is to be
            | a multiple of the width of char (CHAR_BIT bits).
        
             | BenoitEssiambre wrote:
              | I guess I just never encountered 3-, 5-, 6-, or 7-byte
              | word types.
        
               | vardump wrote:
               | 24-bit floats "FP24" at least have been a thing in some
               | ATI graphics cards (R300 & R420) 15+ years ago. Some DSPs
               | have also used (some still do?) 24-bit word width.
               | 
               | These odd word widths are anything but common, though.
        
         | phoe-krk wrote:
         | Common Lisp programmer here.
         | 
         | While contemporary implementations are most commonly tailored
         | to use (UNSIGNED-BYTE 8), (UNSIGNED-BYTE 16), (UNSIGNED-BYTE
         | 32), and (UNSIGNED-BYTE 64) along with their signed
         | counterparts, our language allows one to freely specify and use
         | integer types such as (UNSIGNED-BYTE 53) that could - in theory
         | - be optimized for on architectures that use unique, by today's
         | standards, word sizes.
         | 
          | This also comes from the fact that Common Lisp was
          | specified during times that had no real standardized word
          | sizes, and so the standard had to accommodate different
          | machine types on which a byte could mean different and
          | mutually exclusive things.
        
       | fortran77 wrote:
       | I love Erlang for the ability to deal with _bits_. To see this in
       | a compiled language would be wonderful. Of course, you can get
       | down to the bit level with bitwise logical operations, but to be
       | able to express it more naturally would be a great boon to people
       | writing low-level network stuff, and will probably reduce
       | programming errors.
        
       | dang wrote:
       | Speaking of C, if you missed last week's thread with C Committee
       | members, it was rather amazing:
       | https://news.ycombinator.com/item?id=22865357.
       | 
       | Click 'More' at the bottom to page through it; it just keeps
       | going.
        
       | derefr wrote:
       | > While the spelling is undecided, we intend something like:
       | 1234X would result in an integer literal with the value 1234
       | represented in an _ExtInt(11), which is the smallest type capable
       | of storing this value.
       | 
       | That "smallest type capable of storing this value" is a
       | disappointing approach, IMHO. It'd be a lot more powerful to just
       | be able to pass in bit patterns (base-2 literals) and have the
       | resulting type match the lexical width of the literal. 0b0010X
       | should have a bit-width of 4, not 2.
        
         | saagarjha wrote:
         | I wonder if the suffix could be Xn where n is an integer
         | specifying the width.
        
           | nybble41 wrote:
           | Would 0X12 then be a 12-bit integer with the value zero or a
           | hexadecimal `int` literal with a base-10 value of 18? Does
           | this work for other bases (0X12X12)?
           | 
           | I'm not sure why they picked a letter which can already occur
           | in integer literals rather than one of the many unused
           | letters. Given the focus on FPGAs and HDL it's also worth
           | noting that X is commonly used in binary or hexadecimal
           | constants in HDLs to denote undefined or "don't care" values,
           | which could lead to confusion. Rust integer literal syntax
           | would be perfect here (1234u11 or 1234i11) since it already
           | includes the bit width and is compatible with any base
           | prefix.
        
         | floatingatoll wrote:
         | I think your proposal for 0b0010X makes an excellent _addition_
         | to the 1234X proposal. Has it already been discussed by the
         | working group? If not, you should email someone to ask them to
         | consider it!
        
       | waltpad wrote:
       | First of all, I suppose that it will be possible to make them
       | unsigned (just like for standard types). Is this correct?
       | 
        | Also, what's the relationship between the standard types and
        | the new _ExtInts? Is _ExtInt(16) equivalent to short, or are
        | they considered distinct and in need of an explicit cast?
       | 
       | > In order to be consistent with the C Language, expressions that
       | include a standard type will still follow integral promotion and
       | conversion rules. All types smaller than int will be promoted,
       | and the operation will then happen at the largest type. This can
       | be surprising in the case where you add a short and an
       | _ExtInt(15), where the result will be int. However, this ends up
       | being the most consistent with the C language specification.
       | 
        | For instance, what if I choose to replace short with
        | _ExtInt(16) in the above? What would the promotion rule be
        | then?
        | 
        | Note that it was already possible to implement arbitrarily
        | sized ints for sizes <= 64 by using bitfields (although it's
        | possible that you could fall into UB territory in some
        | situations; I've never used that to do modular arithmetic).
       | 
        | Edit: Ah, there's this notion of underlying type: one may use
        | the nearest larger type to implement a given size, but
        | nothing prevents using an even larger type, for instance:
        | 
        |     struct short3_s { short value:3; };
        | 
        |     struct longlong3_s { long long value:3; };
        | 
        | I don't know what the C standard says about that, but clearly
        | these two types are not identical (sizeof will probably give
        | different results). What will it be for _ExtInt? How will
        | these types be converted?
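A quick check of those two structs in today's C (bit-fields on types other than int/unsigned/_Bool are an implementation-defined extension; sizes cited are for a typical x86-64 ABI):

```c
/* Same 3-bit field, different underlying types; on common ABIs the
   underlying type drives the struct's size and alignment, so the two
   types differ even though both hold only 3 value bits. */
struct short3_s    { short value : 3; };
struct longlong3_s { long long value : 3; };

/* On x86-64 System V, sizeof(struct short3_s) is typically 2 and
   sizeof(struct longlong3_s) is typically 8. */
```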
       | 
        | Another idea: what about
        | 
        |     struct extint13_3_s { _ExtInt(13) value:3; };
        | 
        | Will the above be possible? In other words, will it be
        | possible to combine bitfields with this new feature?
        | 
        | I guess it's a much more complicated problem than it appears
        | to be at first.
        
       | pjmlp wrote:
        | Currently clang is getting it; whether ISO C gets it is
        | another matter.
        
       | xyzzy2020 wrote:
       | How does this not break sizeof ?
        
         | SlowRobotAhead wrote:
         | Just returns true now... problem solved!
        
         | tom_mellior wrote:
         | http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2472.pdf:
         | "_ExtInt types are bit-aligned to the next greatest power-of-2
         | up to 64 bits: the bit alignment A is min(64, next power-
         | of-2(>=N)). The size of these types is the smallest multiple of
         | the alignment greater than or equal to N. Formally, let M be
         | the smallest integer such that A * M >= N. The size of these
         | types for the purposes of layout and sizeof is the number of
         | bits aligned to this calculated alignment, A * M. This permits
         | the use of these types in allocated arrays using the common
         | sizeof(Array)/sizeof(ElementType) pattern."
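The quoted layout rule can be sketched as a small helper (an illustration with a made-up name, not part of the proposal; an 8-bit floor is assumed since alignment can't drop below a char):

```c
/* Sketch of N2472's layout rule: alignment A = min(64, next power of
   two >= N) bits, floored at 8 (the width of a char); size = smallest
   multiple of A that is >= N. Returns the size in bits. */
static unsigned extint_size_bits(unsigned n) {
    unsigned a = 8;
    while (a < n && a < 64)
        a *= 2;                    /* next power of two, capped at 64 */
    unsigned m = (n + a - 1) / a;  /* smallest M with A * M >= N */
    return a * m;
}
```

With CHAR_BIT == 8 this yields sizeof values of 1, 4, and 16 bytes for _ExtInt(3), _ExtInt(17), and _ExtInt(67) respectively.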
        
         | saagarjha wrote:
         | It'd probably round up to the nearest byte, as it already does
         | with the boolean types.
        
           | barbegal wrote:
           | Where does the spec say that it does that? As far as I can
           | tell C only allows objects to have sizes in whole number of
           | bytes, and that includes booleans.
           | 
            | A _Bool can be used for a bit field (having a size of 1
            | bit), but you can't use sizeof with a bit field.
        
             | monocasa wrote:
             | Yeah, the unit of sizeof is number of chars, which is
             | usually a byte.
        
               | _kst_ wrote:
               | In C and C++, a char is by definition a byte.
               | 
               | A byte is CHAR_BIT bits, where CHAR_BIT is required to be
               | at least 8 (and is exactly 8 for the vast majority of
               | implementations).
               | 
               | The word "byte" is commonly used to mean exactly 8 bits,
               | but C and C++ don't define it that way. If you want to
               | refer to exactly 8 bits without ambiguity, that's an
               | "octet".
        
               | hermitdev wrote:
                | I think you worded this pretty well. One thing I'd
                | add (and that annoys me about C & C++) is that the
                | size guarantees for the integral types boil down to
                | sizeof(char) == 1 (with char being CHAR_BIT bits) and
                | sizeof(char) <= sizeof(short) <= sizeof(int) <=
                | sizeof(long). sizeof(T*) (for any T) is not well
                | defined either, and can be OS/compiler specific. That
                | makes cross-platform 32/64-bit support painful,
                | especially because there were no strictly sized
                | integer types before C99's <stdint.h> (C++ got
                | <cstdint> in C++11). And while those headers define
                | types like int32_t and int64_t, which are exactly
                | those sizes where they exist, they're optional; only
                | the int_leastN_t types are required, and those merely
                | have to be large enough to store at least N bits. So,
                | on a hypothetical 40-bit CPU, int32_t might not exist
                | at all, and int_least32_t could very well be 40 bits,
                | if that's the natural "word" size for the CPU.
                | 
                | The devil is always in the details, and the devil is
                | very, very annoying...
        
           | wahern wrote:
           | The object size has to be at least the alignment size so that
           | arrays work properly--&somearray[1] needs to be properly
           | aligned, and that only works if the object size is a multiple
           | of the alignment: sizeof myint >= _Alignof(myint) && (sizeof
           | myint % _Alignof(myint)) == 0.
           | 
           | As the proposal says, the bit alignment of these types is
           | min(64, next power-of-2(>=N)). (Of course, the alignment
           | can't be smaller than 8 bits, which the proposal fails to
           | account for.) Assuming CHAR_BIT==8, it follows that:
            | 
            |     sizeof (_ExtInt(3))  == 1  // 5 bits padding
            |     sizeof (_ExtInt(17)) == 4  // 15 bits padding
            |     sizeof (_ExtInt(67)) == 16 // 61 bits padding
           | 
           | So the amount of padding can be considerable. But that
           | doesn't matter much. What they're trying to conserve is the
           | number of value bits that need to be processed, and in
           | particular minimize the number of logic gates required to
           | process the value. Inside the FPGA presumably the value can
           | be represented with exactly N bits, regardless of how many
           | padding bits there are in external memory.
        
       | Traster wrote:
       | I'd love to know if there's any use to this beyond FPGAs. This
       | just seems to be another case of porting the complexity of RTL
       | design into C syntax so that they can claim they have an HLS
       | product that compiles C to gates. It's not C to gates if you had
        | to rewrite all your C to manually specify the bit widths of
       | every single signal. I really wonder how far we can keep going
       | before the naming police break into Intel Headquarters and rename
       | all their marketing material with "Low Level Synthesis".
        
         | strenholme wrote:
         | Well, encryption research. RadioGatun (SHA-3's direct
         | predecessor), for example, allows the bit width to be any
         | number between 1 and 64, so this will allow us to see how, say,
         | 29-bit integers work with this algorithm.
         | 
          | Most cryptographic algorithms (notably RC5 and RC6, but
          | also Rijndael/AES) can be extended into 128-bit word size
          | variants, and having guaranteed support for 128-bit
          | integers in C would be useful for seeing how these variants
          | act, and for running programs to evaluate their security
          | margin.
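Without _ExtInt, experimenting at odd widths means carrying the width around explicitly. A sketch of a width-parameterized rotate of the kind RC5-style ciphers use (the helper name is made up):

```c
#include <stdint.h>

/* Rotate-left within an arbitrary word width w (1..64), emulated on
   top of uint64_t. With a native _ExtInt(29) etc. the masking would
   be implicit in the type. */
static uint64_t rotl_w(uint64_t x, unsigned r, unsigned w) {
    uint64_t mask = (w == 64) ? ~(uint64_t)0 : (((uint64_t)1 << w) - 1);
    x &= mask;
    r %= w;
    if (r == 0)
        return x;
    return ((x << r) | (x >> (w - r))) & mask;
}
```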
        
         | jschwartzi wrote:
         | It provides another way to represent individual register fields
         | without using bitfields. And probably gives you stronger
         | guarantees about what happens when the bitfield overflows.
         | 
         | It also provides a way to pass those values around without
         | passing the whole register struct around.
        
         | bubblethink wrote:
         | There's also the obvious compression use case. Assuming the
         | rest of your code is sufficiently robust, you can shave off all
         | the excess bits from your data storage. There may be a
         | performance penalty, but you won't have to deal with low level
         | ops or alignment issues. Most real world big data will exceed
         | 32 bits (i.e., the identifiers will exceed 32 bits), but is
         | nowhere close to 64 bits. The benefit is more meaningful if
         | your data now fits in a cache/fast memory whereas it didn't
         | before.
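A minimal sketch of that kind of bit shaving in today's C: packing identifiers known to fit in 40 bits into 5 bytes each (function names are made up; assumes ids < 2^40):

```c
#include <stdint.h>

/* Store 40-bit identifiers in 5 bytes instead of 8, little-endian
   byte order; an array of unsigned _ExtInt(40) would express the
   intent directly, at the cost of some padding. */
static void pack40(unsigned char *dst, uint64_t id) {
    for (int i = 0; i < 5; i++)
        dst[i] = (unsigned char)((id >> (8 * i)) & 0xFF);
}

static uint64_t unpack40(const unsigned char *src) {
    uint64_t v = 0;
    for (int i = 0; i < 5; i++)
        v |= (uint64_t)src[i] << (8 * i);
    return v;
}
```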
        
         | flohofwoe wrote:
         | Admittedly also hardware-related, but:
         | 
         | Arbitrary bit-width integers are great for writing computer
         | emulator code. There's a ton of odd-width counters and
         | registers in microchips, and being able to map those directly
         | to integer variables instead of having to do a "bit-mask-dance"
         | after each operation at least would increase readability (and
         | probably also add a bit of type-safety).
         | 
          | (Zig also has arbitrary bit-width integers up to 128 bits,
          | but other than that I haven't seen this outside of
          | hardware-description-languages).
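The "bit-mask-dance" in question, for a hypothetical 10-bit program counter (a sketch, not taken from any particular emulator):

```c
#include <stdint.h>

#define PC_MASK 0x3FFu  /* 2^10 - 1: the value range of a 10-bit counter */

/* Every update must re-mask so the counter wraps at 10 bits; an
   unsigned _ExtInt(10) would make the wraparound implicit. */
static uint16_t pc_advance(uint16_t pc, uint16_t delta) {
    return (uint16_t)((pc + delta) & PC_MASK);
}
```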
        
         | [deleted]
        
         | steerablesafe wrote:
          | Generic code for getting the high 64 bits of an unsigned
          | integer multiplication of 64-bit values. Can be useful for
          | fixed-point math, for example.
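One way to write that today leans on the non-standard `__int128` that GCC and Clang provide on 64-bit targets, which `_ExtInt(128)` would generalize:

```c
#include <stdint.h>

/* High 64 bits of a 64x64 unsigned multiply, via unsigned __int128
   (a GCC/Clang extension on 64-bit targets, not standard C). */
static uint64_t mulhi64(uint64_t a, uint64_t b) {
    return (uint64_t)(((unsigned __int128)a * b) >> 64);
}
```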
        
         | pjc50 wrote:
         | I was wondering that, and share your skepticism of
         | autotranslation (it basically never works, and the only reason
         | people like it is that the HDLs are stuck in the 80s).
         | 
         | But I think the "no automatic promotion or conversion" combined
         | with "will error if combined with different width" could
         | actually make extint(8) and extint(16) useful - it's a massive
         | hint to autovectorisers and lets you generate the SIMD
         | instructions for those widths.
         | 
         | Doubly so if they make sure never to write the words
         | "undefined" where they mean "implementation-defined" for
         | extint. At the moment normal arithmetic in C (x = x+1) is
         | potentially undefined behaviour.
        
           | qppo wrote:
           | High level synthesis works perfectly fine, just not from C.
           | HDLs have chugged along too, it's just the toolchains are
           | ridiculously expensive and risky to change. That's why
           | hardware tech stacks lag behind the state-of-the-art.
           | 
           | I share the skepticism of high level synthesis _from C_ as
           | being a bad motivation. The workflow is more like
           | metaprogramming, and C is terrible at that.
        
             | gpanders wrote:
             | > High level synthesis works perfectly fine, just not from
             | C
             | 
             | What other HLS languages have you used? I'm not aware of
             | any apart from C (and a limited subset of C++).
        
               | nybble41 wrote:
               | Clash[1] is one HLS language which generates HDL from a
               | large subset of Haskell. The language _is_ Haskell,
               | actually, but not every construct can be synthesized;
               | e.g. recursive data types and FFI calls have no HDL
               | equivalent. Clash hooks into the compiler (GHC) and
               | generates HDL from the intermediate representation. You
               | can also run the Haskell code directly to see the
               | simulated output.
               | 
               | [1] https://clash-lang.org/
        
         | mratsim wrote:
         | Cryptography.
         | 
         | Want to do finite field computation on a 254-bit integer? Now
         | you can (BN254, very popular for zero-knowledge proofs)
         | 381-bit? you're covered.
         | 
         | It's very perf critical and the field modulus bitsize and
         | values are known at compile-time. (example in my library where
         | I basically had to implement the same machinery: https://github
         | .com/mratsim/constantine/blob/ff9dec48/constan...)
        
           | ori_b wrote:
           | Does the compiler guarantee constant time? If not, it's still
           | useless for cryptography. If it does, then it becomes useless
            | for regular work because plain bigint will kick its ass on
           | performance, especially when doing division.
           | 
            | This is an efficiency hack for FPGAs.
        
             | mratsim wrote:
             | Constant-time division is super slow for sure but it's not
             | needed for cryptography.
             | 
             | Modulo is also similarly slow but rarely needed as well.
             | For example in elliptic curve cryptography it's only needed
             | at the very beginning of a computation when hashing to the
             | elliptic curve or when producing a secret key from a random
             | input key material.
             | 
              | In terms of speed, LLVM iXYZ wide-integer code is usually
              | faster for basic operations (modular addition,
              | subtraction, multiplication); the big slowness is in
              | modular inversion, where constant-time inversion is about
              | 20x slower than GMP inversion.
             | 
             | Source: https://github.com/herumi/mcl#how-to-build-without-
              | gmp - this code has a GMP backend, an LLVM i254 / LLVM
              | i381 backend, or its own JIT compiler that uses the MULX,
              | ADCX and ADOX instructions.
             | 
              | Plain bigints in general, including GMP, are bottlenecked
              | by memory allocation, while the iXXX types from LLVM are
              | purely stack-based.
             | 
              | Regarding constant-time, I know that there have been
              | petitions to provide a constant-time flag to LLVM, but no
              | guarantees so far. Unfortunately you have to take an
              | after-the-fact verification approach today unless you drop
              | down to assembly (see
              | https://github.com/mratsim/constantine/wiki/Constant-time-
              | ar... for a couple of the constant-time verifiers available)
        
               | remcob wrote:
                | > or its own JIT compiler that uses the MULX, ADCX and
                | ADOX instructions.
               | 
                | Does LLVM's bigint implementation use those instructions?
                | Last I tried, LLVM could not handle the required data
                | dependencies on individual carry bits, making it
                | impossible to use ADCX/ADOX in LLVM without inline
                | assembly.
        
               | mratsim wrote:
                | LLVM properly generates ADC code when you use
                | __addcarry_u64, contrary to GCC[1], which generates
                | really dumb code with setc/mov in and out of the carry
                | flag. However, __addcarryx_u64, which is supposed to
                | generate ADCX and ADOX, is not properly handled and only
                | generates ADC.
               | 
               | For the iXYZ themselves, the carries are properly handled
               | and it can generate mulx at least: https://github.com/her
               | umi/mcl/blob/master/src/asm/x86-64.bmi... (from straight
               | LLVM IR). I don't think ADCX/ADOX are possible though
               | even in LLVM IR.
               | 
                | I think you are mixing this up with a comment on the GCC
                | mailing list about GCC not having an adequate
                | representation of carries, much less a representation
                | that can separate carry and overflow chains[2]
               | 
               | [1] https://gcc.godbolt.org/z/2h768y [2]
               | https://gcc.gnu.org/legacy-ml/gcc-
               | help/2017-08/msg00100.html
        
             | steerablesafe wrote:
             | Compilers generally never guarantee constant time. The C
             | and C++ standards have no notion of observing timing.
        
               | nwallin wrote:
               | Tangentially related: C++ does have algorithmic
               | complexity guarantees. For instance, std::unordered_map
               | at() and operator[] are guaranteed to be average case
               | complexity of O(1), and std::nth_element is guaranteed to
               | be average case O(n).
        
               | 0xFFC wrote:
               | What is observing timing?
        
               | qznc wrote:
               | One could imagine an annotation for if-statements that
               | both branches should run for the same number of
               | instructions or cycles.
        
               | [deleted]
        
               | steerablesafe wrote:
               | I meant "observation of timing".
        
             | contravariant wrote:
              | I'm somewhat confused. Constant with respect to what,
              | exactly? You can't have constant-time operations on
              | arbitrary bit-length integers, and once you fix the one
              | parameter you have, I fail to see what 'constant' means.
        
               | chubot wrote:
               | Background: https://en.wikipedia.org/wiki/Timing_attack
        
               | pkaye wrote:
                | Constant time w.r.t. the input values for a given bit
                | length. For example, a multiplier that is faster when one
                | of the values is 0 would not work.
        
               | nightcracker wrote:
               | Constant time operations in the context of cryptography
               | means that the runtime of the operation is not dependent
               | on the data being manipulated.
               | 
                | As an example, suppose you were checking a 256-bit number
                | for equality. This would be unacceptable:
                | 
                |     bool eq256(uint32_t a[8], uint32_t b[8]) {
                |         for (int i = 0; i < 8; ++i) {
                |             if (a[i] != b[i]) return false;
                |         }
                |         return true;
                |     }
                | 
                | Why? Because depending on the data this function takes
                | longer or shorter, and timing attacks might be used to
                | figure out secret data. Instead you need an
                | implementation like this:
                | 
                |     bool eq256(uint32_t a[8], uint32_t b[8]) {
                |         uint32_t diff = 0;
                |         for (int i = 0; i < 8; ++i) {
                |             diff |= a[i] ^ b[i];
                |         }
                |         return diff == 0;
                |     }
               | 
               | This implementation always takes the same amount of time
               | regardless of the contents of a and b.
               | 
               | It goes further than this as well, you're not supposed to
               | take branches based on secret data nor access memory
               | locations based on secret data as those can be recovered
               | through the branch predictor and cache respectively.
        
               | leni536 wrote:
                | There is nothing stopping an optimizing compiler from
                | "optimizing" your function into non-constant time.
        
               | saagarjha wrote:
               | While downvoted at the time of my response, this answer
               | is correct. The optimizing compiler makes no timing
               | guarantees, so it's not required to use any of the
               | constructs in your second answer _at all_. If you need
               | guaranteed timing, you cannot use standard C (instead use
               | assembly or compiler extensions).
        
               | cyphar wrote:
               | A proper implementation uses the correct compiler
               | intrinsics and hints to cause such operations to be
               | constant-time. Timing problems are a fairly well-known
               | problem in cryptography and constant-time implementations
               | are often required for safe cryptography.
        
               | leni536 wrote:
               | Fair enough. I just wanted to point out that standard C
               | is inadequate on its own. I'm pretty sure that
               | cryptographers are aware of this.
        
               | mratsim wrote:
                | Often those hints boil down to using assembly, though.
               | 
               | Constant-time is a pain to achieve and new compiler
               | optimizations might make your code not constant-time
               | anymore:
               | https://www.cl.cam.ac.uk/~rja14/Papers/whatyouc.pdf
               | 
                | Given that I'm writing such a cryptographic library, I am
                | very interested in the compiler hints you use.
        
               | contravariant wrote:
               | Oh right, that's what's being meant. I confused it with
               | constant time as in O(1).
               | 
               | Wouldn't most implementations of fixed length integers
               | have constant time though? It barely seems worth
               | optimizing unless your integers vary massively in size,
               | at which point using fixed size integers is clearly
               | suboptimal.
        
               | mratsim wrote:
                | I have an in-depth look at constant-time and the security
                | implications of not being constant-time here:
               | 
               | https://github.com/mratsim/constantine/wiki/Constant-
               | time-ar...
        
             | remcob wrote:
             | > Does the compiler guarantee constant time? If not, it's
             | still useless for cryptography.
             | 
             | Side-channel resistant algorithms are only required when
             | you are handling sensitive data. This is often not the case
             | when you are verifying signatures, verifying proofs or
             | generating succinct proofs of computation without a zero-
             | knowledge requirement.
        
         | zamadatix wrote:
         | Network protocols immediately came to mind for me. They love
         | packed structs of weird bit sizes because anything that's not
         | payload is overhead on every message for the rest of time.
         | Fields can also be large, e.g. VLANs are 12 bits, VXLAN ids are
         | 24 bits, MAC addresses are 48 bits, IPv6 addresses are 128 bits
         | so it's not just limited to a couple of small sized bitflag
         | style things.
        
           | SlowRobotAhead wrote:
           | You can already do this in C though, If EVER SO SLIGHTLY
           | wasteful in non-practical terms.
           | 
            | If you need a 23-bit object you just structure it to be that.
           | It's a couple of AND or SHIFT ops when accessing, but so
           | what? Even for 100Gbit networking you aren't going to max out
           | even a slightly appropriate CPU.
        
             | zamadatix wrote:
             | The same argument could be made to get rid of all numeric
              | types except the largest. That's likely how it would
              | compile down on platforms without native 23-bit types,
              | except it would be handled automatically based on the
              | target. I
             | think the point of such a feature is to abstract you from
             | thinking about if the machine has a native 23 bit type the
             | same way you don't think if the machine has a native 64 bit
             | type or a hardware float type today. Also when you do this
             | manually you're now responsible for tracking the actual
             | type and such too. Beyond that you also want to do
             | operations on these fields not just store them and ignore
             | them, does the IP match this policy? Have I learned this
             | MAC? Is this TOS in an allowed range or does it need to be
             | bleached? Constantly pulling these out and putting them
             | back in, the above work isn't a one time thing.
             | 
             | 100 Gigabit networking eats more CPU than you think,
             | especially if you're actually looking at headers. It's an
             | enormous portion of cloud CPU usage and a big reason
             | networking is still driven by what's easy to put in an ASIC
             | vs running the easier thing in software.
        
             | detaro wrote:
             | You probably could get the compiler to shout at you for a
             | few more mistakes with explicit types?
        
           | Someone wrote:
           | For packed structures, C already has bit fields. Example
           | (from https://en.cppreference.com/w/cpp/language/bit_field):
            |     struct S {
            |         // will usually occupy 2 bytes:
            |         // 3 bits: value of b1
            |         // 2 bits: unused
            |         // 6 bits: value of b2
            |         // 2 bits: value of b3
            |         // 3 bits: unused
            |         unsigned char b1 : 3, : 2, b2 : 6, b3 : 2;
            |     };
           | 
           | This is more aimed at large integers.
        
             | gray_-_wolf wrote:
             | Kinda important part mentioned in the link is
             | 
             | > Adjacent bit field members may be packed to share and
             | straddle the individual bytes.
             | 
             | So it might not be packed. Ofc it usually is, at least on
             | gcc and such.
        
               | hermitdev wrote:
               | I think the bigger issue in regards to networking is
               | ordering, not the (lack of) packing. I don't think the
               | ordering is guaranteed and probably depends on CPU
               | architecture (e.g. big vs little endian), not necessarily
               | compiler.
        
             | drblast wrote:
             | I'm pretty sure the exact layout is not guaranteed in this
             | case, so while the above code may work in many cases it's
             | not possible to represent all data structures like this,
             | particularly if the bits are not byte or machine word
             | aligned.
             | 
              | I don't think the standard even guarantees in which bits of
              | an integer the bit fields will be stored. That's important
              | for
             | network protocols.
        
         | elgfare wrote:
         | I can see this being used in serialization protocols.
        
         | moonchild wrote:
         | They fixed autopromotion rules:
         | 
         | > if a Binary expression involves operands which are both
         | _ExtInt, rather than promoting both operands to int the
         | narrower operand will be promoted to match the size of the
         | wider operand, and the result of the binary operation is the
         | wider type.
        
         | steerablesafe wrote:
         | Although it's statically sized, it could be a building block for
         | bignum arithmetic. The last time I tried, compilers were pretty
         | bad at optimizing generic code for bignum addition, although
         | it's pretty easy to hand-optimize in assembly.
        
         | huit wrote:
         | _It's not C-to-gates if you had to rewrite all your C to
         | manually specify the bit widths of every single signal_
         | 
         | Ideally, for FPGA design you only have to use the special
         | bitwidths for the interface of a module. The implementation can
         | be in normal, wider C types. The compiler can optimize these
         | operations to smaller bitwidths by realizing that the higher
         | input bits are zero/sign-extended and the higher output bits are
         | not used. You can help the optimizer by making some variables
         | smaller bitwidths, but there's no need to rewrite everything.
         | 
         | I implemented this once for a C-to-hardware compiler and it
         | worked quite well. The compiler had a lot of builtin-types, all
         | signed and unsigned integers from 1 to 64 bits wide, named
         | __int1..int64. See 'extended integer types' in the manual:
         | http://valhalla.altium.com/Learning-Guides/GU0122%20C-to-Har...
        
       | 0xTJ wrote:
        | I'm very much in support of this. One thing I like about Zig[1]
        | is that integers are explicitly given sizes. I've been playing
       | recently with it, but I'm waiting for a specific "TODO" in the
       | compiler to be fixed.
       | 
       | [1] https://ziglang.org/
        
       | saagarjha wrote:
       | I wonder if this could help standardize some vectorized code as
       | well.
        
       | detaro wrote:
       | What was wrong with the actual title?
       | 
       | > _The New Clang _ExtInt Feature Provides Exact Bitwidth Integer
       | Types_
        
         | dang wrote:
         | We've changed back to that above. Submitted title was "C
         | possibly gaining support for N-bit integers".
        
         | moonchild wrote:
          | It implements a proposed feature for the C language, which is
          | the more interesting part of it.
        
           | [deleted]
        
           | pjmlp wrote:
            | Not all proposed features get accepted, especially given how
            | conservative WG 14 tends to be.
        
             | hermitdev wrote:
             | Absolutely a true statement, but it should also be noted
             | that WG 14 tends to be more accepting of proposals that
             | have working extension(s) in a major compiler.
        
               | pjmlp wrote:
               | Well, that wasn't enough to rescue Annex K, nor to have a
               | way to solve the impending adoption issues.
        
               | wahern wrote:
               | Which vendor fully implemented Annex K? For several years
               | _after_ C11 was published no vendor fully implemented
                | Annex K, not even the sponsor, Microsoft. I haven't
                | checked in a while, so maybe things have changed.
        
       | mshockwave wrote:
        | The title is a little misleading, since _ExtInt is just a Clang
        | extension, not a standard. GCC and Clang both have some hidden
        | features that are not in the standard.
        
         | dang wrote:
         | We've changed it now
         | (https://news.ycombinator.com/item?id=22948380).
        
         | mappu wrote:
         | It is on the standards track, though, even if N2472 was not
         | completely accepted it seems like there is a process for this
         | (or something very much like it) to become a standard.
        
       | beefhash wrote:
        | Note that the spec[1] requires that this tops out at an
        | implementation-defined maximum size, so you're likely not
        | getting out of writing bignum code yourself (and even if
        | implemented, the bignum operations would likely be variable-time
        | and thus unsuitable for any kind of cryptography). Making the
        | maximum size completely implementation-defined also sounds like
        | it'll be unreliable in practice; I feel like making it at least
        | match the bit size of int would be a worthwhile trade-off
        | between predictability for the programmer aiming for portability
        | and simplicity of implementation.
       | 
       | [1] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2472.pdf
        
         | captainbland wrote:
         | Judging by the motivation section, the motivation is primarily
         | for FPGAs which I guess is why they want to allow these sub-int
         | sized bit values. You might come up with some custom
          | C-programmable operator that is only 3 bits wide, where before
          | you were presumably forced to use the smallest available
          | power-of-2 word size, which would waste resources. So I think
          | actually
         | the idea is that this is for code which is not supposed to be
         | portable at all, but rather hyper-optimised for custom devices.
        
         | [deleted]
        
         | sgeisler wrote:
         | Since I had to implement something like that in rust for a
         | base32 codec [1] a few years ago I really like the idea.
         | Although my main concern was ensuring that invariants are
          | checked by the type system, which might not be as much of a
          | concern in C with its implicit conversions?
         | 
         | [1] https://github.com/rust-bitcoin/rust-
         | bech32/blob/master/src/...
        
         | wongarsu wrote:
         | Both Clang and gcc already support a 128bit integer type, so
         | it's certainly possible that "implementation-defined" will end
         | up being 128-bit or 256-bit for x64 targets on common compilers
         | (provided MSVC plays along).
        
       | jhj wrote:
       | Much of my time is spent writing Mentor Catapult HLS for ASIC
       | designs these days.
       | 
       | Every HLS vendor or language has their own, incompatible
       | arbitrary bitwidth integer type at present. SystemC sc_int is
       | different from Xilinx Vivado ap_int is different from Mentor
       | Catapult ac_int is different from whatever Intel had for their
       | Altera FPGAs. It's a real mess.
       | 
       | I'm hoping this is another small step to slowly move the industry
       | into a more unified representation, or at least if LLVM support
       | for this at the type level could enable faster simulation of
       | designs on CPU by improving the CPU code that is emitted. What
       | probably matters most for HLS though are the operations which are
       | performed on the types (static or dynamic bit slicing, etc).
        
         | aDfbrtVt wrote:
         | I'm in the same boat. After having played with all the other
         | vendor libraries, I think I like ac_datatypes the most. It's
         | been really fast and the Catapult is a pretty good engine. Can
         | I ask what industry you're in? I'm in telecom.
        
           | jhj wrote:
           | I work for Facebook.
        
       | waynecochran wrote:
       | At some point they need to branch off and not call it C anymore.
       | C should stay relatively small -- small enough that a competent
       | programmer could write a compiler and RTS for it.
        
         | weinzierl wrote:
         | Yes, keep C clean and add all the cruft to another language
         | derived from C. We could call the new language C++.
        
           | Gibbon1 wrote:
           | My opinion is C and C++ need a divorce. So that C can be
           | modernized with features that make sense in the context of
           | the language. And not constrained as a broken subset of C++.
        
             | detaro wrote:
             | That has already happened though?
        
               | Gibbon1 wrote:
               | I think it's starting to happen because C++ has become so
               | grossly Byzantine. C refuses to relinquish a bunch of
               | niche applications. The heyday of OOP is past. 10 years
                | ago the attitude was that C was going to die any day now.
                | Now it's more like: since C isn't dying, it needs
                | improvements.
               | And none of them are backports from C++ nor make sense in
               | C++.
        
         | wongarsu wrote:
         | A lot of the world still runs on C99, and a lot of (toy or
         | academic) compilers are written for C0 (a simple, small, safe C
         | subset). Even when a new C version gains more features you can
         | still develop against C99, C0, or whatever version you prefer.
        
           | waynecochran wrote:
            | The choice is often an illusion. You only get to control the
            | version for code you write, and only if you are programming
            | in isolation. As soon as you work on a larger team project
           | that may also include third party code you no longer dictate
           | what version of C is being used.
        
       | SloopJon wrote:
       | C++ has sped up the pace of its releases, but I don't have a
       | sense of where C is. I didn't realize until I looked it up just
        | now that there's a C18, although I gather that it's an even
        | smaller change than C95 was.
       | 
       | Safe to say that a feature like this would be standardized by
       | 2022 at the earliest?
        
         | GTP wrote:
          | I just found out about C18 thanks to your comment; I was still
          | at C11. Thanks. Anyway, I think you're right, except that I
          | don't like it when language designers release versions too
          | quickly. I don't know the situation in C++ land, but as an
          | example I think that Java took the wrong path.
        
           | hermitdev wrote:
           | Since C++11, the ISO committee has been aiming for a new
           | standard release every 3 years. So far, they've kept this
            | cadence up. I don't recall if C++20 is actually out yet, but
            | I know the feature set was finalized last year; if it's not
            | out yet, it's probably just due to editorial issues. (I've
            | not been using C++ for work the last few years, so my
            | knowledge might be a bit dated.)
        
           | jcelerier wrote:
           | > but as an example I think that Java took the wrong way.
           | 
            | I wonder what the right way is, then? Java is apparently too
            | fast for you, and yet it gets improvements so slowly that its
            | market share is getting eaten by other JVM languages moving
            | much faster.
           | 
           | If it was even slower it could as well be put directly next
           | to the dusty COBOL and RPG boxes in the IBM attic.
        
             | saagarjha wrote:
             | This is recent, though. The releases of Java 1-7 were
             | fairly slow and conservative.
        
             | pjmlp wrote:
             | Dusty Cobol?!?
             | 
             | https://www.microfocus.com/en-us/products/visual-
             | cobol/overv...
             | 
              | The latest standard revision is from 2018, and you can
              | even do OOP if you feel so inclined.
        
           | SloopJon wrote:
           | There's something to be said for the new Java approach of
           | releasing often, with a stable LTS release every now and
           | then, even if Oracle is muddying the waters with their
           | licensing. The only release after 8 that interests me right
           | now is 11. Meanwhile, the features of Java 12, 13, and 14 are
           | available for people who do want to experiment with them.
           | 
           | I think we'll see this implicitly with C++. C++11 and the
           | mostly non-controversial updates in C++14 comprise "modern"
           | C++, whereas adoption of C++17 seems to be a bit slower.
        
       | SlowRobotAhead wrote:
       | > These tools take C or C++ code and produce a transistor layout
       | to be used by the FPGA.
       | 
       | Hmm, I haven't been following that but it seems that...
       | 
       | > The result is massively larger FPGA/HLS programs than the
       | programmer needed
       | 
       | And there it is.
       | 
       | Really seems odd to me to try and force procedural C into non-
       | linear execution of FPGA. Like it seems super odd, and when
       | talking about changes to C to help that... I really don't get it.
       | 
        | This isn't what C is for. What is the performance advantage over
        | Verilog? How many people want n-bit ints in C when automatically
        | handled structures work well for most people?
       | 
       | Maybe I'm just not seeing the bigger picture here and that
       | example was just poor?
        
         | Cyph0n wrote:
         | Not to mention that the first statement is simply false...
         | 
         | The final result is a bitstream that determines which LUTs
         | (lookup tables) and BRAM (memory/block RAM) to use on the chip,
         | and how they should be connected/routed.
         | 
         | The FPGA fabric itself is made of transistors, but your C/C++
         | (HLS) or HDL code is not _directly_ controlling these
         | transistors. This is what makes FPGAs so flexible relative to
         | ASICs.
        
       | drfuchs wrote:
       | So, if I have an array of extint(3), does it pack them nicely
       | into 10-per-32-bit-word? Or 21-per-64-bit-word? Will a struct
       | with six extint(5) fields fit into 4 bytes? What about just a few
       | global variables of extint(1)? Will they get packed into a single
       | byte? Did I miss where this is covered?
        
         | tom_mellior wrote:
         | http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2472.pdf does
         | not mention structs at all, which is disappointing.
         | 
         | I quoted this language below: "_ExtInt types are bit-aligned to
         | the next greatest power-of-2 up to 64 bits: the bit alignment A
         | is min(64, next power-of-2(>=N)). The size of these types is
         | the smallest multiple of the alignment greater than or equal to
         | N. Formally, let M be the smallest integer such that A * M >=
         | N. The size of these types for the purposes of layout and
         | sizeof is the number of bits aligned to this calculated
         | alignment, A * M. This permits the use of these types in
         | allocated arrays using the common
         | sizeof(Array)/sizeof(ElementType) pattern."
         | 
         | But to be honest I don't understand what it's trying to say. If
         | bit width N = 3, the next power of 2 is 4, so would that mean
         | that "bit alignment(?)" A = 4? Then M = 1 is the smallest
         | integer such that A * M >= 3. Then the size of the type would
         | be 4 bits? That wouldn't fly with sizeof.
        
       | rurban wrote:
        | That's what I wrote in reply to their reddit post:
        | 
        | The feature is of course fantastic. But the syntax still looks a
        | bit overblown.
        | 
        | Type-system-wise this seems more correct:
        | 
        |     _ExtInt(a) + _ExtInt(b) => _ExtInt(MAX(a, b) + 1)
        | 
        | And int + _ExtInt(15) might need a pragma or warning flag to
        | warn about that promotion. One little int, or automatic int,
        | pollutes all.
        
         | Traster wrote:
         | Problem is:
         | 
         |     _ExtInt(16) + _ExtInt(15) => _ExtInt(17)
         |     _ExtInt(17) + _ExtInt(15) => _ExtInt(18)
         | 
         | So let's say we have a, b, and c. a is 16 bits, b and c are 14
         | bits.
         | 
         |     a + (b + c) => _ExtInt(17)
         |     (a + b) + c => _ExtInt(18)
         | 
         | Now obviously this is a trivial example, but it highlights the
         | fact that unless you're actually willing to carry the true
         | ranges around in your type system, your calculated bit widths
         | are going to vary based on which operations are done in which
         | order, with which intermediary variables.
        
       ___________________________________________________________________
       (page generated 2020-04-22 23:00 UTC)