A64-OLinuXino update


ะ64

We just got some more information from Allwinner about the A64, and the good news is that it has a Gigabit Ethernet interface!

So besides the WiFi + BT 4.0, the A64-OLinuXino will also have a native Gigabit Ethernet interface 🙂

27 Comments

  1. Bobby
    Oct 23, 2015 @ 11:34:58

    Then what is the point of H64?


  2. Darko
    Oct 23, 2015 @ 11:49:15

    What is the price of the chip in normal volume?


  3. Morgaine
    Oct 23, 2015 @ 16:43:05

    Hooray! That will probably make this my favourite micro-server board.

    It may not be a very fast one computationally owing to the narrow external memory path on this SoC, but gigabit Ethernet on A64 hopefully overcomes the internal bottlenecks that prevented back-to-back frames on the wire and restricted network throughput to around 700 Mbps on 32-bit ARM.

    Morgaine.


    • ssvb
      Oct 23, 2015 @ 19:30:52

      The memory interface is actually not too shabby, at least on paper. The same 32-bit dram bus width is also used in Allwinner A10/A20 with a 400MHz clock speed limit, though in practice some devices run it at 480MHz to gain more speed. And A64 has a 667MHz dram clock speed limit according to the posted diagram.

      The A31 tried to use a 64-bit dram bus width (dual-channel 32-bit), but at the same time happened to have severe problems with the clock speed limit (was it just 312MHz?). In practice this means roughly the same bandwidth as the narrower but higher clocked siblings, but much worse memory access latency, resulting in an overall performance loss.

      The A13/A23/A33 SoCs only have a 16-bit dram bus width, and they are best avoided if you are really interested in high performance graphics. These SoC variants were designed for low resolution 800×480 tablets and don't need much dram bandwidth.

      By the way, the A10-Lime board also uses only a 16-bit dram bus width for cost reduction reasons, and this is not particularly good for graphics performance. For example, you can check the table at the bottom of the http://ssvb.github.io/2014/11/11/revisiting-fullhd-x11-desktop-performance-of-the-allwinner-a10.html page to see how the dram bus width and dram clock speed affect the glmark2-es2 benchmark score on Allwinner A10 devices.

      To sum it up: if the A64 can really clock its dram at 667MHz, then performance should not be an issue.
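
      A rough back-of-the-envelope check of these figures, as a sketch only: it assumes DDR-type memory transferring two words per dram clock and ignores controller efficiency and access latency.

      ```c
      #include <stdio.h>

      /* Theoretical peak bandwidth in GB/s for a DDR-style memory interface:
       * clock (MHz) * 2 transfers per clock * bus width in bytes. */
      static double peak_gb_s(double dram_clock_mhz, int bus_width_bits)
      {
          return dram_clock_mhz * 1e6 * 2.0 * (bus_width_bits / 8.0) / 1e9;
      }

      int main(void)
      {
          printf("A10/A20 @ 480MHz, 32-bit: %.1f GB/s\n", peak_gb_s(480, 32));
          printf("A31     @ 312MHz, 64-bit: %.1f GB/s\n", peak_gb_s(312, 64));
          printf("A64     @ 667MHz, 32-bit: %.1f GB/s\n", peak_gb_s(667, 32));
          return 0;
      }
      ```

      This prints roughly 3.8, 5.0 and 5.3 GB/s: the A31's wide bus lands in the same ballpark as the higher-clocked 32-bit parts, while a 667MHz A64 would come out slightly ahead.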


  4. ssvb
    Oct 23, 2015 @ 19:44:32

    And Gigabit Ethernet is excellent news! It looks like this is going to be a great development board, assuming that Allwinner eventually publishes the A64 datasheet and the A64 user manual at https://github.com/allwinner-zh/documents like they did for their older chips.

    Hopefully H64 will also have SATA as a differentiating feature, or there will be no reason to upgrade to it 😉


  5. SK
    Oct 23, 2015 @ 22:48:34

    Very nice 🙂


  6. ssvb
    Oct 23, 2015 @ 23:00:09

    Actually, with Ethernet available, does it make sense to have the onboard WiFi + BT now? If dropping the WiFi feature can reduce board cost and provide one more USB host port (for connecting a USB keyboard and mouse simultaneously), then perhaps that's the way to go?


  7. Morgaine
    Oct 24, 2015 @ 06:16:50

    I agree with ssvb’s suggestion in the preceding post.

    The spec for the A64-OLinuXino product was put together under the impression that the SoC provided no Ethernet, which made it a very application-specific type of device best suited for portable equipment. Wifi and BT made sense in the spec under those circumstances.

    But now that it is known that gigabit Ethernet is on-chip, the situation changes, and a more conventional (and more useful) wired OLinuXino becomes possible. Wifi and Bluetooth are then no longer necessary for the majority of applications, and add a significant price burden which will reduce sales.

    What’s more, providing both wired and wireless together is not even appealing as a feature, because the two are in conflict. For portable applications, the gigabit Ethernet is a significant power drain if enabled and a waste of hardware and money if completely disabled. For wired applications the opposite is true: the Wifi and Bluetooth only make sense if the application is an access point, and otherwise just create a security liability and should be turned off, hence wasting money again. The use case for wired and wireless together is really narrow, and doesn’t create a good product niche.

    Morgaine.


  8. OLIMEX Ltd
    Oct 24, 2015 @ 09:18:04

    WiFi+BT 4.0 uses the SDIO interface, not USB, so the USB OTG and USB-HOST ports remain and I do not see a problem here.
    The GMAC though is multiplexed with the LCD interface, so you have to decide if you want LCD or Gigabit Ethernet, which is not a bad idea: if you want to make a tablet/laptop with an LCD you do not need Gigabit Ethernet but WiFi+BT, and if you want to make a desktop/server you can use HDMI+GMAC.
    We are now trying to design the PCB so that the LCD or GMAC choice is hardware selectable via a jumper on the board, so the board may work in both configurations.
    The USB host remains for wireless keyboard/mouse connections like the BT-KBD1 or BT-KBD2 we have, and we will add the standard LCD connector to connect directly to the LCD-OLinuXino 4.3, 7.0, 10.1 and 15.6 inch LCDs we have; in this mode GMAC will be traded for LCD.
    If you want to work with an HDMI display, then you will have all of WiFi, BT and Gigabit Ethernet.


    • ssvb
      Oct 24, 2015 @ 16:32:13

      Thanks for the explanations about the WiFi+BT. Indeed, if they are hooked up via SDIO, then there are no drawbacks.

      Regarding the LCD / GMAC selection jumper: would it also be possible to make sure that the software is able to read its state via GPIO? This way the bootloader can check GMAC availability and select the right DTB blob automatically when booting.
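
      A minimal sketch of what such a bootloader check could look like. The GPIO helper, pin number, jumper polarity and DTB file names below are purely hypothetical placeholders, not the actual A64-OLinuXino design:

      ```c
      #include <stdbool.h>

      /* Hypothetical primitive standing in for whatever GPIO read
       * the bootloader actually provides. */
      extern bool gpio_get_value(unsigned int pin);

      /* Hypothetical pin the LCD/GMAC selection jumper might be routed to. */
      #define JUMPER_LCD_GMAC_PIN 42u

      /* Pick the device tree blob to hand to the kernel based on the jumper
       * state (polarity is an assumption: jumper open = GMAC variant). */
      const char *select_dtb(void)
      {
          if (gpio_get_value(JUMPER_LCD_GMAC_PIN))
              return "sun50i-a64-olinuxino-gmac.dtb"; /* hypothetical name */
          return "sun50i-a64-olinuxino-lcd.dtb";      /* hypothetical name */
      }
      ```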


    • Morgaine
      Oct 24, 2015 @ 18:55:26

      OLIMEX Ltd:
      Understood, that sounds like an effective design. I’m still worried about the effect that having all those options on-board will have on price, but we’ll have to see what you can do there. Either way, it’s sure to be an interesting board and I’m looking forward to it. 🙂

      Morgaine.


  9. LIME-fan
    Oct 24, 2015 @ 11:46:00

    If possible, please add 2-4 of those nice 1.27mm 40-pin headers (for lots of GPIO/I2C/SPI) that you used on the LIME boards, to be as flexible and compact as possible 🙂
    Also the LIME connector layout, where all important connectors (Ethernet, power, USB) are on one side, would be very nice. This makes the design of custom housings a lot easier.


  10. LinuxUser
    Oct 25, 2015 @ 10:03:32

    Woot! Gigabit Ethernet is nice. Besides everything else, it could enable something like reasonable routers or NAS-like devices.

    But..
    > The GMAC though is multiplexed with LCD interface,

    Dang, what a strange decision. And, hmm, aren’t there some alternate pins for GMAC? But whatever, it appears it will be quite a cool thing overall, and it will be great if there are several different versions available for different occasions. And since it’s open hardware and can be opened in KiCad… hmm, it appears to be the best open hardware board around. Hopefully it will be welcome in open-source communities like Sunxi-Linux (after all, vendor SDKs suck a lot, and in the long run the plain vanilla mainline kernel has proven to be far more stable; I really prefer it on A10 and A20 boards these days for most applications).


  11. buzz
    Oct 27, 2015 @ 09:35:51

    Having 8GB eMMC would also be a huge plus – freeing any SD slots for removable storage


  12. Thomas
    Nov 02, 2015 @ 11:58:38

    If the performance figures for the A64 look promising, I would love to see an A64 SOM and some sort of ‘motherboard’ with a BCM53125 7-port switch IC, 3 external Ethernet jacks and the ability to host 4 x A64 SOMs. This would make an interesting attempt to build the cheapest ARM cluster ever 😉


  13. Morgaine
    Nov 02, 2015 @ 20:41:35

    Thomas: Although it’s not physically elegant, nothing beats an external gigabit switch as the “backplane” of a clustering solution: switches are now rock-bottom in price, have very high performance, make the cluster easily extendable, and don’t tie you in to one vendor.

    While an “Ethernet backplane” implemented on a PCB with SOM sockets would be interesting, it wouldn’t really give the cluster any advantages other than a small form factor and elegance, at a cost of harsh vendor tie-in and probably very limited future upgradeability. Such a tradeoff doesn’t have a lot going for it unless you need a turnkey system.

    To make a PCB-based cluster backplane appealing enough to offset its inherent disadvantages, it would have to provide additional features such as Lights-Out Management — an on-board terminal server with its own RJ45 for network control of the serial consoles of modules and providing power-cycling capability for each one independently. Without something like that, a proprietary clustering solution is more of a liability than an asset.

    Morgaine.


    • Thomas
      Nov 03, 2015 @ 00:12:00

      Since in such a scenario there’s no need to waste space on large connectors like HDMI, GPIO pins and the like, I thought about one of the smaller common form factors like SO-DIMM (35x68mm): http://elinux.org/Embedded_Open_Modular_Architecture

      If one uses double-sided carrier-boards with dimensions of e.g. 200x40mm, then it might be possible to cram 96 A64 SOMs into a 1U ‘pizza box’ enclosure (2 rows with 12 carrier modules each — one module containing 4 x A64 SOMs, so we would end up with 384 Cortex-A53 cores per rack unit). If every A64 SoC gets its own Ethernet jack, then we would already need 96 switch ports, which would require 2 additional rack units containing 48-port switches: 3U needed for 96 A64s.

      The approach with a cheap ‘backplane internal’ 7-port switch IC connecting 4 A64 SOMs and 3 Ethernet jacks would mean you could implement some sort of mesh interconnecting the 24 carrier-boards, so you would only need 8 real switch ports instead of one per A64 (96), since the carrier-boards connected to the switch can each provide 2 more links to other carrier-boards. Therefore you would spend only 1 rack unit containing a 48-port switch (or better, a 52-port switch with 10 GbE uplinks) for 6 rack units full of A64 SOMs (then you get 2,304 Cortex-A53 cores every 7U, or 13,440 in a standard 42U rack in total if you use an additional 10 GbE TOR switch per rack as interconnect).
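
      For what it’s worth, here is one way those density figures can be reproduced; a sketch under stated packaging assumptions (24 carrier boards per 1U, one uplink per three boards in the mesh, a 1U 48-port switch, and one 1U TOR switch per rack), not a definitive layout:

      ```c
      #include <stdio.h>

      int main(void)
      {
          /* Packaging assumptions, taken from the description above. */
          const int boards_per_u   = 2 * 12;           /* 2 rows x 12 carrier boards   */
          const int soms_per_board = 4;                /* 4 x A64 SOM per carrier      */
          const int cores_per_som  = 4;                /* quad Cortex-A53              */
          const int uplinks_per_u  = boards_per_u / 3; /* mesh: 1 uplink per 3 boards  */
          const int switch_ports   = 48;               /* one 1U 48-port switch        */
          const int rack_units     = 42;
          const int tor_units      = 1;                /* 1U 10GbE TOR switch per rack */

          const int cores_per_u = boards_per_u * soms_per_board * cores_per_som;
          printf("cores per 1U of SOMs: %d\n", cores_per_u);              /* 384     */

          /* A 48-port switch serves 48 / 8 = 6U of SOM enclosures. */
          const int som_u_per_switch = switch_ports / uplinks_per_u;
          printf("cores per %dU group: %d\n", som_u_per_switch + 1,
                 som_u_per_switch * cores_per_u);                         /* 2304/7U */

          /* Fill a 42U rack with SOM units, the switches they need and one TOR. */
          int som_units = 0;
          for (int s = 1; ; s++) {
              int switches = (s * uplinks_per_u + switch_ports - 1) / switch_ports;
              if (s + switches + tor_units > rack_units)
                  break;
              som_units = s;
          }
          printf("42U rack: %dU of SOMs, %d cores\n",
                 som_units, som_units * cores_per_u);                     /* 13440   */
          return 0;
      }
      ```

      Under these assumptions the 384, 2,304 and 13,440 figures fall out directly; real cooling, cabling and power constraints would of course change the numbers.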

      And yes, I know that the whole approach is a bit brain-dead, since if one were to implement an HPC cluster with performant interconnects, one would go the A57/CCN route instead: http://www.anandtech.com/show/8776/arm-challinging-intel-in-the-server-market-an-overview/5

      But I would assume that design approach would cost at least ten times more compared to the ‘el cheapo’ cluster above, which would also work at a much smaller scale. On the other hand, people even implement clusters made of Raspberry Pis with their horrible ‘one USB 2.0 connection between SoC and the outside’ interconnection capabilities.

      Hmm, unfortunately the whole idea doesn’t match Olimex’ business model at all. 🙂


    • Thomas
      Nov 03, 2015 @ 11:58:45

      Morgaine: I thought again about it and think you’re right. But it still might work at a reasonable price using a 12-port switch IC like the BCM5696 and a carrier-board able to serve 8 A64 SOMs. On the carrier-board, its own SoC connected to the SOMs via UART and to one of the BCM5696’s Ethernet ports via SGMII could serve as a Lights-Out Management controller able to power-cycle the SOMs individually.

      Might be interesting for co-location hosters: since fewer internal cables are needed and there’s more room for airflow, you might get 2 rows of 16 boards, each carrying 8 SOMs, into a 2U enclosure. That is 128 SOMs per rack unit instead of 96, and only 11 real switch ports every 2U means you would only need 4 x 52-port switches per 42U rack and have 38U left to be filled with 2U enclosures containing 256 SOMs each. That would make up to 4,864 SOMs or 19,456 Cortex-A53 cores per 42U rack. 🙂


      • LinuxUser
        Nov 10, 2015 @ 19:44:21

        One obvious cheat: if you are doing purely local switch-device interconnects, which are on-board PCB track connections, as far as I know you can somewhat deviate from the Ethernet requirements and save on the magnetics by using “direct” connections. Since these are very short lanes, it would be okay and far cheaper than a full implementation (uhm, 12 transformers would cost a bit).

        However, it would be a completely wrong idea to use Ethernet without a transformer if it goes outside of the device, because the device would get its port fried by the slightest EMI hitting the Ethernet cable. But I guess it does the trick for purely local interconnects.

        And it seems you’re seriously up for microservers :). Whatever, I use a few A20 devices as reasonable hybrids of NAS and microserver and it is quite fun. So I wish you luck in your experiment. The most annoying thing about the A64 in this regard is the lack of a SATA port, I guess.

    • LinuxUser
      Nov 11, 2015 @ 00:43:46

      The problem with an external gigabit switch is that it takes a heck of a lot of space; in some cases it could be comparable to the rest of the system full of “microserver” thingies. Which isn’t really great, because space in a datacenter isn’t exactly free of charge, so the more stuff you can pack into a limited amount of space the better, in terms of $ vs performance, etc. And if one does not really care about that, they can use a huge, power-hungry x86 server after all. Yet “microservers” are interesting for the following reason: they are small and so they are cheap per unit, so the price of “colocation” of a single microserver module could be very low. Yet it can compete with VM offerings, with the major advantage that the system is fully dedicated to its user and nobody else would suddenly crash the host, hog resources, or whatever. And while the schedulers in Linux were seriously uplifted to mitigate such things, a dedicated piece of hardware is still more predictable. And good luck leasing a VM with comparable guaranteed resources at a comparable price point. So, actually, it is not uncommon to see some microservers assembled by strange ppl here and there. You see, to keep something like a personal blog one doesn’t really need a 16-core Xeon with 256GB RAM; it would do okay even on such a board, especially if one is good at administering Linux and can set things up properly, like using nginx to temporarily cache dynamic stuff as static, etc.


      • Thomas
        Nov 11, 2015 @ 12:29:59

        For this ‘microserver’ approach using SOM modules, the lack of an (m)SATA port could be compensated for by fast eMMC (according to the A64’s User Manual: “Comply to eMMC standard specification V5.0”).

        But since we now know that Olimex is trying to implement a ‘one size fits all’ aweSOM standard, this wouldn’t work with the ‘microserver’ idea, since there the eMMC would have to be part of the SOM, while Olimex will for sure design aweSOM with external storage in mind. 🙂

      • OLIMEX Ltd
        Nov 11, 2015 @ 13:14:16

        I would refrain from commenting on the aweSOM concept before we have a working prototype for at least one SoC. As you can see from the other comments, some people may decide that this will happen tomorrow and hesitate whether to put their current projects on hold. Just to make it clear: to prototype, test and be sure you have a reliable product which people can use in production, at least 6 months will pass, so I expect no real aweSOM product before mid 2016.

  14. Andrew
    Jan 05, 2016 @ 12:47:56

    Does it have USB 2.0 only (no 3.0)?

