The ServerPack 35 is part of Acromove’s portable server family. What sets the ServerPack 35 apart from other portable servers is that it is a hyperconverged, software-defined system that packs a great deal of capability into a small footprint. The combination of compute, storage, and networking gives users everything they need in one portable box designed to work in nearly any environment. Beyond physical flexibility, the server is also agnostic to operating system and software applications.
Most servers are fixed in a rack, and even those billed as portable are usually meant to be loaded into the back of a vehicle and driven onsite. The Acromove ServerPack 35 is unique in that it comes in a highly portable, military-grade Pelican case that is IP67-rated watertight and dustproof. The server itself is made of aircraft-grade aluminum and carbon fiber, making it lighter to transport and set up on site. Since it is portable, vibration can be a major issue; Acromove addressed this with patented floating caddies and connectors that have no direct contact with the chassis. The server can be used in a rack, in the field, on the road, or even floating in water, as the case is waterproof (though if it flips over, it is done for).
Getting into the hardware itself, the ServerPack 35 SP3B leverages Intel Xeon processors with 4, 8, or 16 cores, up to 128GB of ECC RAM, and up to 168TB of storage capacity (12 x 14TB HDDs). The server also comes with several unique features, including GPS/GSM positioning, remote data lock/kill, mission-based system locking, WiFi tower triangulation, a DataLogger with hidden SD storage, and environmental sensor measurements. This makes the server a top contender for remote IT locations such as offshore rigs, isolated wind farms, and military operations.
While Acromove provides the usual data protection features (such as encryption), this is not a normal server: it is potentially in motion. To this end, the company offers its AcroTrace system to tell users exactly where the server, and the data, is located. According to Acromove, AcroTrace employs a series of sensors and wireless communication channels, combining verified location data with physical events from the sensors to verify the chain of custody. The cases come with encrypted SD cards that act like an airplane’s black box, recording whether the case has been opened, whether the server has been lifted out, and whether the case has been dropped or rolled. This “black box” draws on pressure, shock, temperature, humidity, and proximity sensors, plus accelerometers, gyroscopes, magnetometers, and GPS/GNSS. Users can leverage the web application, Device Gateway Platform, to track the location and status of the Acromove system in near real time.
Users interested in purchasing the ServerPack 35 SP3B may request a quote from Acromove. Acromove also offers a 10-day rental service for $2,000 (not including shipping costs), which is enough time to perform a full transfer of 100TB+ (depending on compression) and all the round trips needed, assuming next-day shipping.
Acromove ServerPack 35 SP3B Specifications
| Specification | Details |
|---|---|
| Capacity | Up to 168TB |
| CPU | Intel Xeon D-1541 SoC (8 cores/16 threads, 2.10GHz) or D-1587 SoC (16 cores/32 threads, 1.70GHz) |
| Memory | 32GB DDR4 ECC RDIMM (2 x 16GB); 128GB max (4 x 32GB) |
| Number of drives | 12 |
| Drive types | 3.5″ SAS3/SATA3 helium-filled HDD; 2.5″ SAS3/SATA3 SSD or NVMe (with adapter) |
| Exoskeleton | Pelican Protector 1440 with steel ball-bearing wheels and telescoping handle, weatherproof (IP67) |
| NAS OS | AcroNAS custom OS/Ubuntu Linux; also user-definable for any OS and software |
| OS drive | 64GB SSD SuperDOM (128GB max, optional) |
| ZFS cache | 512GB–2TB M.2 NVMe SSD (optional) |
| Standard interfaces | 1 x USB 3.0; 2 x 1GbE RJ45 copper (1 x 1GbE RJ45 copper plus SFP+ on X87 models); 2 x 10GbE RJ45 copper (2 x 10GbE SFP+ on X87 models, no RJ45); 1 x 1GbE IPMI RJ45; 1 x VGA; 1 x PCIe Gen 3 x16 slot, with riser to 2 x8 slots (one occupied by an Avago 9400i 16-port 12G SAS HBA) |
| Onboard display | 4 x 20-character OLED for system status and setup |
| Server operating system | User-selectable (any Linux or Windows Server) |
| Satellite connectivity option | Upgradeable |
| Security features | TPM/Trusted Execution Technology, encrypted boot and OS Guard; AntiTamper Protection System; remote system lockdown |
| Encryption | AES-NI hardware acceleration; AES-256 (provided by OS and applications) |
| Environmental monitoring | Basic Environmental Control (BEC): CPU and PSU temperature, shock, and battery level reporting to OS; Advanced Environmental Control (AEC): upgradeable; data stored on secure SD card on chassis |
| Physical asset tracking | GPS module included; Iridium satellite real-time tracking upgradeable |
| Acromove Device Gateway | Platform API connection: upgradeable |
| NFS-supported operating systems | Microsoft Windows XP or later; Mac OS X; Linux and Unix |
| Containerization | Docker, Canonical, and CoreOS Rocket; VMware |
| Dimensions (WxDxH) | 500.38 x 304.8 x 457.2mm (19.7 x 12 x 18in); airline length 126.2cm (49.7″) |
| Weight | 18.05kg (39.8lb) without disks; 26.49kg (58.4lb) with 12 x 12TB HDDs; softcase 3.8kg (8.4lb), withstands a 2m drop |
| Power supply | 80–264V AC, 50/60Hz, 400W |
| Operating conditions | Temperature: 32–113°F (0–45°C) ambient; humidity: 95% with open cover; closed cover waterproof to 1m (3.3ft) for 30 min; altitude: 4,000m (13,000ft) max; acoustics: 36dBA at 1m (3.3ft); shock: withstands up to 100G external shock (HDDs up to 300G non-operating) |
Design and Build
It’s worth noting that the Acromove ServerPack family shares the same design across all of its models. The server itself is encased in a Pelican 1440 case, a waterproof, dustproof, and crushproof enclosure. While this isn’t a device you’d want to carry around all the time, the strong, lightweight case comes with wheels and a telescoping handle, which makes transporting the ServerPack 35 SP3B much easier.
Once opened, you’ll see the server housed inside, with the underside of the lid containing an accessory organizer. Looking at the top corners of the server (which is made of aircraft-grade aluminum and carbon fiber), you’ll find two air exhausts with a finger grip placed in between. All vents have screened covers to prevent larger debris from falling inside the fan mechanisms. On the left side, starting underneath the air exhaust, are four Ethernet ports, the AC power inlet, and the filtered air intake. On the right side, running in the same direction, are the RJ45 IPMI port, a USB 3.0 port, and a VGA port. Each port, except the power port, is covered with a snap-on lid to prevent dirt or dust ingress, helping keep everything clean and protected regardless of the scenario the server is deployed in.
In the middle of the server is the OLED control panel and, directly below that, the brand’s logo. The OLED screen is highly useful and somewhat unique compared to most server status displays. It shows the usual information such as power status and temperature, but it also surfaces operating system details such as the network interface IP address. So when deployed in the field, even if a crash cart with a keyboard and display isn’t available, you can still retrieve much of the server’s low-level information through the onboard user interface.
For management of the Acromove ServerPack 35 SP3B, we leveraged the FreeNAS OS.
Upon installation of FreeNAS, users can navigate to the web interface to continue configuration; below, we walk through the menus and common configuration changes. After logging in, users are presented with the FreeNAS Dashboard, which provides information about the system such as the OS version, processor, amount of RAM, bandwidth on the primary network, and the FreeNAS pools, along with quick reporting on CPU and memory usage, CPU temperature, and system load average. On the left-hand menu we see options for Accounts, System, Network, and so on.
General: This includes options such as which protocol to use for the web interface (port 80, or port 443 for SSL), which certificate to use for SSL, whether to bind to a specific IP address, whether to use the default ports 80 and 443 and redirect HTTP to HTTPS, plus the language, console keyboard map, timezone, which syslog level to log, and whether to send data to a syslog server.
NTP Servers: This allows users to add and remove NTP servers for synchronizing the system time with time servers around the world. By default, FreeNAS ships with FreeBSD’s NTP pools in use.
Boot Environments: FreeNAS allows for multiple boot environments, which makes updating FreeNAS somewhat lower risk and lets users roll back to the previous environment. This screen also displays the status of the boot pool.
Advanced: This menu allows users to set advanced options, such as showing the console without a password prompt, serial console settings, swap size, enabling or disabling autotune (which can optimize the system for the specific hardware), enabling or disabling the debug kernel for the next reboot, adjusting the MOTD banner, which FQDN to use for logging, and whether to report CPU usage as a percentage.
Email: Allows for configuration of email alerts that the system can generate.
System Dataset: This is used to select the pool that will contain the persistent system dataset; the system dataset stores debugging files and Samba4 metadata.
Alert Services: This contains options for how FreeNAS can send alerts to its owner/operator. There is also Alert Settings, which allows customizing how quickly FreeNAS sends an alert.
Cloud Credentials: FreeNAS is capable of using services such as Amazon Cloud Drive, Amazon S3, Backblaze, and others.
Tunables: This option allows for changes to the FreeBSD kernel that runs FreeNAS, as well as additional parameters.
Updates: This allows users to check for and install FreeNAS updates.
CA’s: View and manage the Certificate Authorities that FreeNAS utilizes.
Certificates: View and manage certificates that FreeNAS can utilize for the web interface.
Support: This option allows users to generate a bug report directly on FreeNAS.
Cron Jobs: This daemon runs a command or script on a regular schedule.
Init/Shutdown Scripts: This allows for a command or script to be run at startup or shutdown of the system.
Rsync Tasks: This utility allows data to be synchronized between FreeNAS systems.
S.M.A.R.T. Tests: SMART stands for Self-Monitoring, Analysis and Reporting Technology. This allows the system to monitor hard drives, potentially detect and report impending drive failures, and allow preemptive replacement.
Periodic Snapshot Tasks: This allows for scheduled snapshots that create read-only versions of pools and datasets at a given point in time. Snapshots keep a history of files and provide an easy way to recover an older copy of a file, or possibly a deleted one.
Replication Tasks: This allows for the duplication of snapshots between FreeNAS systems.
Resilver Priority: Resilvering, or rebuilding, is the process of copying data to a replacement disk. Ideally we’d want this process to complete as quickly as possible; this menu allows adjusting the priority of resilvering.
Scrub Tasks: A scrub is how ZFS scans through the data on a pool and helps to identify any issues with data integrity and detection of silent data corruption.
Cloud Sync Tasks: Allows files or directories to be synchronized to remote cloud store providers.
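Several of the task types above (periodic snapshots in particular) boil down to generating timestamped snapshot names and pruning those that have aged out of a retention window. Below is a minimal Python sketch of that bookkeeping; the dataset name, `auto-` prefix, and two-week retention period are illustrative assumptions, not FreeNAS defaults.

```python
from datetime import datetime, timedelta

def snapshot_name(dataset, when):
    """Build a ZFS-style snapshot name, e.g. tank/data@auto-20240301-0000."""
    return f"{dataset}@auto-{when.strftime('%Y%m%d-%H%M')}"

def expired(snapshot_times, now, keep=timedelta(weeks=2)):
    """Return snapshot timestamps older than the retention window."""
    return [t for t in snapshot_times if now - t > keep]

now = datetime(2024, 3, 1)
print(snapshot_name("tank/data", now))  # tank/data@auto-20240301-0000

# Of these two snapshots, only the January one falls outside the two-week window.
old = expired([datetime(2024, 1, 1), datetime(2024, 2, 25)], now)
print(old)  # [datetime.datetime(2024, 1, 1, 0, 0)]
```

A real task would then call `zfs snapshot` and `zfs destroy` (or let FreeNAS do so); the sketch only shows the scheduling logic.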
Global Configuration: This allows for configuration of global network settings that are not specific to a single network interface.
Interfaces: This menu will present any interfaces that are manually configured and allows for adding or editing a manually configured network interface.
Link Aggregations: This allows for combining of multiple network interfaces into a single interface and can provide fault tolerance or high multi-link throughput.
Static Routes: No static routes exist on FreeNAS by default; this option can be used if a specific portion of the network needs to be reachable.
VLANs: This menu allows for configuring FreeNAS to communicate over a VLAN.
Pools: This menu allows for creation of a data pool as well as displays any current pools created. Options for this also include creating an encrypted pool. Please be sure to read up on the various ZFS RAID levels.
Snapshots: This will display any snapshots that have been created and create new snapshots.
VMWare Snapshots: This allows for cooperation between ZFS snapshots and VMware Datastores.
Disks: View all disks currently recognized by FreeNAS. This will display the disk name, serial number, disk size, and any description that has been entered, as well as additional info such as transfer mode, HDD standby settings, and advanced power management.
Importing Disk: This can be used to import disk(s) that are formatted for a file system FreeNAS can recognize.
Active Directory: Allows for integration with Windows Active Directory; users can enter the domain name and credentials.
LDAP: Allows for integration with Windows, Mac OS X Server, and OpenLDAP running on BSD or other Linux systems.
NIS: This allows for configuration of NIS (Network Information Services).
Kerberos Realms, Keytabs, and Settings: These options can be used in conjunction with Active Directory or LDAP without a password.
AFP: This allows for configuration of the Apple Filing Protocol.
NFS: This allows for Network File System configuration.
WebDAV: This allows for configuration of WebDAV shares.
Windows (SMB) Shares: FreeNAS is capable of creating Samba shares using the SMB protocol that can be used by Windows, OS X, and some Linux systems.
iSCSI: This allows for configuration of iSCSI shares, which lets FreeNAS act as a SAN (Storage Area Network) and present disks to VMware ESXi or even Windows.
Services: This menu allows for configuration of various FreeNAS services: toggling them between running and stopped, setting them to start automatically, or editing their settings.
Available Plugins: This menu lists the plugins FreeNAS is capable of installing, including Bacula, GitLab, Nextcloud, Plex Media Server, and others.
Plugins Installed: This will list any installed plugins on the system.
Jails: This menu lists any jails currently created and allows creating a new one. A jail is a lightweight FreeBSD environment that can isolate services from the FreeNAS host OS itself.
Reporting Menu: This allows for selecting reporting on CPU, Disk, Memory, Network, Partitions, System, Target (iSCSI), and ZFS.
CPU: This reporting shows the time spent by the CPU on various items such as executing user code, system code, and idle time.
Disk: This reporting shows the read/write statistics on I/O, percent busy, latency, and operations per second as well as pending I/O requests and disk temperature.
Memory: Displays the memory usage and usage of swap space.
Network: Displays transmitted and received traffic in megabytes per second for each configured network interface.
Partition: Displays free, used, and reserved space for each pool and dataset.
System: Displays the number of processes.
Target: This displays the bandwidth statistics for iSCSI ports.
ZFS: Displays the ARC size, hit ratio, demand data, demand meta data, and pre-fetch data.
Virtual Machines: Recent versions of FreeNAS can run virtual machines via the bhyve hypervisor. This allows multiple guest OSes, such as Windows and various Linux distributions, to exist on FreeNAS.
Shell: The web interface of FreeNAS allows a user to execute commands via the web browser when logged in as the root user.
Enterprise Synthetic Workload Analysis
Our enterprise shared storage and hard drive benchmark process preconditions each drive into steady-state with the same workload the device will be tested with under a heavy load of 16 threads with an outstanding queue of 16 per thread, and then tested in set intervals in multiple thread/queue depth profiles to show performance under light and heavy usage. Since hard drives reach their rated performance level very quickly, we only graph out the main sections of each test.
Preconditioning and Primary Steady-State Tests:
- Throughput (Read+Write IOPS Aggregate)
- Average Latency (Read+Write Latency Averaged Together)
- Max Latency (Peak Read or Write Latency)
- Latency Standard Deviation (Read+Write Standard Deviation Averaged Together)
Our Enterprise Synthetic Workload Analysis includes four profiles based on real-world tasks. These profiles have been developed to make it easier to compare to our past benchmarks as well as widely-published values such as max 4k read and write speed and 8k 70/30, which is commonly used for enterprise drives.
- 4K
  - 100% Read or 100% Write
  - 100% 4K
- 8K 70/30
  - 70% Read, 30% Write
  - 100% 8K
- 8K (Sequential)
  - 100% Read or 100% Write
  - 100% 8K
- 128K (Sequential)
  - 100% Read or 100% Write
  - 100% 128K
For our testing, we configured the Acromove ServerPack 35 SP3B in RAID-Z1 with compression. With the primary deployment model of this server being on-site import and export of data, our tests focused on file-level CIFS testing, connected to our test server via twin SFP+ 10G connections. The anti-shock technology allows the use of high-capacity/low-cost HDDs in the field without worrying about damage to the disks, but SSDs could also be installed if faster throughput and more IOPS are desired.
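For a rough sense of what RAID-Z1 costs in capacity: single-parity RAID-Z dedicates approximately one drive’s worth of space to parity. The sketch below ignores ZFS metadata overhead and compression, both of which change the realized figures, so treat it as a back-of-the-envelope estimate only.

```python
drives = 12
drive_tb = 14

raw_tb = drives * drive_tb           # total raw capacity
usable_tb = (drives - 1) * drive_tb  # RAID-Z1: roughly one drive of parity

print(raw_tb)     # 168
print(usable_tb)  # 154
```

With compression enabled, effective capacity can of course exceed the raw usable figure depending on the data.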
In the first of our enterprise workloads, we measured a long sample of random 4K performance with 100% write and 100% read activity. Looking at IOPS, the ServerPack 35 SP3B posted 670 IOPS write and 809 IOPS read. With 4K average latency (where lower is better), the Acromove server hit 381.71ms write and 316.21ms read. Next, we take a look at 4K max latency; here, the ServerPack 35 SP3B showed 2,514.9ms write and 1,899.5ms read. For our last 4K test we looked at standard deviation, where we saw figures of 670ms write and 809ms read.
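The 4K throughput and average latency figures line up with what Little’s Law predicts for this workload: with 16 threads each keeping 16 I/Os outstanding (256 in flight), IOPS is approximately the outstanding I/O count divided by the average latency. A quick cross-check:

```python
outstanding = 16 * 16  # 16 threads x 16 queue depth = 256 I/Os in flight

def iops_from_latency(outstanding_io, avg_latency_ms):
    """Little's Law: throughput = concurrency / latency (latency in seconds)."""
    return outstanding_io / (avg_latency_ms / 1000.0)

print(round(iops_from_latency(outstanding, 381.71)))  # 671, vs. 670 IOPS measured (write)
print(round(iops_from_latency(outstanding, 316.21)))  # 810, vs. 809 IOPS measured (read)
```

The near-exact agreement simply reflects that at a fixed queue depth, IOPS and average latency are two views of the same measurement.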
Compared to the fixed 16-thread, 16-queue max workload we performed in the 100% 4K write test, our mixed workload profiles scale performance across a wide range of thread/queue combinations. In these tests, we span workload intensity from 2 threads/2 queue up to 16 threads/16 queue. In throughput, the Acromove server kicked off the test at 11,841 IOPS and finished at 9,579 IOPS. Looking at average latency, the ServerPack 35 SP3B showed a burst performance of 0.33ms and remained relatively low throughout most of the test, finishing at 26.7ms, nearly double the next-highest figure. Next, we look at max latency; here, the ServerPack 35 SP3B showed steady performance throughout, ranging from 696.46ms at burst to 763.31ms at the end. For our final 8K test, we look at standard deviation, where the ServerPack 35 SP3B posted scores of 2.42ms to 31.74ms.
Our next benchmark measures 100% 8K sequential throughput with a 16T16Q load in 100% read and 100% write operations. Here, the ServerPack 35 SP3B was able to hit 32,059 IOPS write and 53,593 IOPS read. The last Enterprise Synthetic Workload benchmark is our 128K test, which is a large-block sequential test that shows the highest sequential transfer speed for a device. In this workload scenario, the ServerPack 35 SP3B posted an impressive 2,251,162 KB/s write and 2,314,547 KB/s read.
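To put the 128K numbers in network terms, converting KB/s to line rate shows the server running close to the roughly 20Gb/s ceiling of the twin SFP+ 10G links. The conversion below assumes 1KB = 1,024 bytes; decimal units would give a slightly lower figure.

```python
def kbps_to_gbps(kb_per_s):
    """Convert KB/s (1 KB = 1,024 bytes) to gigabits per second."""
    return kb_per_s * 1024 * 8 / 1e9

write_gbps = kbps_to_gbps(2_251_162)
read_gbps = kbps_to_gbps(2_314_547)
print(f"{write_gbps:.1f} Gb/s write, {read_gbps:.1f} Gb/s read")  # 18.4 Gb/s write, 19.0 Gb/s read
```

At ~18–19Gb/s against a 20Gb/s aggregate link, the bottleneck in this test is the network rather than the storage.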
Acromove’s ServerPack 35 SP3B is one of several available models in the ServerPack family. The server comes encased in an extremely sturdy, durable, water- and dustproof Pelican case, providing enhanced protection for the critical components inside. The server can leverage 4-, 8-, or 16-core Intel Xeon processors, up to 128GB of ECC RAM, and has a storage capacity of up to 168TB. The ServerPack 35 SP3B achieves up to 3TB/hour using HDDs, which makes it very efficient for transferring large datasets between datacenters and cloud upload sites; 100TB can be moved in about a week, including shipping time. It is also available from Acromove on a short-term rental basis.
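The “about a week” claim is easy to sanity-check from the quoted 3TB/hour rate. Assuming one fill at the source site, one offload at the destination, and next-day shipping each way (all illustrative assumptions), the arithmetic works out with margin to spare:

```python
dataset_tb = 100
rate_tb_per_hour = 3  # quoted HDD transfer rate

transfer_hours = dataset_tb / rate_tb_per_hour
print(round(transfer_hours, 1))  # 33.3 hours per fill or offload

shipping_days_each_way = 1  # next-day shipping assumption
total_days = 2 * transfer_hours / 24 + 2 * shipping_days_each_way
print(round(total_days, 1))  # 4.8 days end to end
```

Real-world compression, link speed, and courier schedules would shift these numbers, but the week-scale estimate holds comfortably.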
For performance, we looked at the ServerPack 35 SP3B in a RAID-Z1 with compression configuration over CIFS. During our 100% read/write random 4K test, the server posted 670 IOPS write and 809 IOPS read. 100% 8K sequential showed improved results, as expected, at 32,059 IOPS write and 53,593 IOPS read. For our 128K test, the ServerPack 35 SP3B saturated the dual 10G connection we were leveraging, posting 2,251,162 KB/s write and 2,314,547 KB/s read.
Overall, we couldn’t be more impressed with this portable storage solution. In any situation that may require taking a server onsite, users should take a serious look at the Acromove ServerPack 35 SP3B.
Source: StorageReview.com