AWS VPN site-to-site tunnel behind NAT using pfSense and the IKEv1 IPsec VPN protocol

Pre-configuration steps

The desired configuration achieved by this tutorial is presented in the diagram below.

Diagram 1 Environment for testing AWS VPN using pfSense

Workstation environment

For the purpose of presenting AWS VPN capabilities, a virtual environment was created in VMware Workstation 14.

Networking – initial configuration

Our network configuration is presented in the table below:

Name          IP address   Interface                Purpose
Gateway                                             pfSense gateway (ref. pfSense WAN)
local                                               Windows 2008 Domain Controller, DNS server
linux1                     eno16777736 (lin_vif)    Management server
pfSense WAN                WAN
pfSense LAN                LAN

Table 1 Networking – on-premises.
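As a hedged sketch of the AWS side of this setup (all IDs, the public IP, and the ASN below are placeholders, not values from this environment), the matching customer gateway and VPN connection can be created with the AWS CLI:

```shell
# Customer gateway = the public (NAT) address in front of pfSense
aws ec2 create-customer-gateway --type ipsec.1 \
    --public-ip 203.0.113.10 --bgp-asn 65000

# Virtual private gateway, attached to the target VPC
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0

# Static-routing VPN connection between the two gateways
aws ec2 create-vpn-connection --type ipsec.1 \
    --customer-gateway-id cgw-0123456789abcdef0 \
    --vpn-gateway-id vgw-0123456789abcdef0 \
    --options StaticRoutesOnly=true
```

The downloadable configuration returned for the VPN connection then supplies the pre-shared keys and tunnel endpoints needed on the pfSense side.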

Read More

HP 3PAR CRC errors in correlation with Brocade SAN

HP 3PAR CRC errors and Invalid transmission word in correlation with Brocade SAN switches

The advantage of HP 3PAR lies in its monitoring mechanisms. One of them is especially useful when we do not have any dedicated monitoring within our SAN. Thanks to it, we have the opportunity to detect a potential issue before it substantially affects our environment.

3par showhost -lesb

Intermittent CRC Errors Detected

Let’s take a look at how HP 3PAR can present such information to us. The first example comes from the HP 3PAR Service Processor Onsite Customer Care (SPOCC).

Event type: evt_host_port_crc_errors			     
ID: 30003
Component: Port 3:1:3
Short Dsc: Host Port 3:1:3 experienced over 50 CRC errors (50) in 24 hours
Event String: Port 3:1:3 Degraded (Intermittent CRC Errors Detected {0x2})

The same thing can be checked on the HP 3PAR system itself. Just look at the event log.

3PAR-cluster cli% showeventlog -oneline -startt 1/1/16
2016-01-01 12:33:18 GMT        2 Minor           FC LESB Error   sw_port:2:3:2 FC LESB Error Port ID [2:3:2]-Counters: (Invalid transmission word) (Invalid CRC) -ALPAs:  140700
2016-01-01 12:44:19 GMT        2 Minor           FC LESB Error   sw_port:3:1:3 FC LESB Error Port ID [3:1:3]-Counters: (Invalid transmission word)-ALPAs:  140700
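On the Brocade side, these counters can be correlated with the switch-port error statistics. A minimal sketch (the port number below is an example, not taken from this environment):

```shell
# Brocade FOS: per-port error counters; look for crc_err and enc_out
# on the switch port connected to 3PAR port 3:1:3
porterrshow

# Details for a single switch port, including link error counters
portshow 13

# Clear counters after replacing a suspect SFP/cable, then re-check
portstatsclear 13
```

Incrementing crc_err together with enc_out on one port usually points to a failing cable or SFP between the switch and the array port.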


Read More

HP 3PAR disk replacement

HP 3PAR disk replacement. How to deal with a failed drive on 3PAR

This article covers disk replacement on 3PAR for administrators who want to know a little more about what happens in the background during a disk replacement.

3PAR logical layer

When discussing disk replacement on 3PAR, the logical layer cannot be omitted, as it is fundamental to the hard drive replacement procedure. The logical layer of 3PAR consists of a few levels. Overall, the structure is not complicated, starting with physical disks and ending at Virtual Volumes.

physical disks (PD) → logical disks (LD) → Common Provisioning Groups (CPGs) → Virtual Volumes (VV)

Physical disks are divided into chunklets; starting with the 7000 series, these are fixed-size 1 GB chunklets. 3PAR then uses the chunklets to build LDs. This all happens without any administrator involvement. The chunklet is the basic logical unit in 3PAR terminology. Thanks to this approach, we receive nicely virtualized storage with a virtual RAID approach, which gives a lot more flexibility, also in terms of redundancy. On the other hand, when some blocks within a specific chunklet are unreadable, the whole chunklet (1 GB) is marked as failed.
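The state described above can be inspected from the 3PAR CLI. A sketch (the PD id in the last command is only an example):

```shell
# List physical disks that are failed or degraded
showpd -failed -degraded

# Show failed chunklets, which explain why a PD was marked degraded
showpdch -fail

# Evacuate data from the drive's magazine before physically
# pulling it (PD id 42 is an example)
servicemag start -pdid 42
```

Once servicemag reports the magazine as ready, the drive can be swapped and the data relocated back.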

Read More

HP 3PAR 7200 Simulator

Deploying HP 3PAR 7200 Simulator with Windows Server 2012 in VMware Workstation as a part of Virtual LAB environment.

This article is about building/expanding a lab environment around an HP 3PAR clustered simulator, together with a Windows 2012 Server running Active Directory and a DNS server.

Prerequisites for HP 3PAR Simulator Lab

HP 3PAR Simulator for VMware Workstation

The notes below about features and requirements come from the document HP 3PAR StoreServ Simulator Version 3.2.1 MU2 (June 2015) – HP_3PAR_Simulator_Release_Notes_v3.2.1_Z7550-96179.pdf.

  • Supported Features – the following 3PAR StoreServ Storage System features are supported:
    • Up to 48 HDDs, 4 cage configuration
    • Cage types – DCN1, DCS1, DCS2
    • Disk types – FC, SSD, NL
    • 3PAR Management Console support
    • CLI and Remote CLI support
    • CIM-API (SMI-S) and WS-API support
    • Storage Provisioning including Thin-Provisioning
    • Exporting Virtual Volumes
    • Adaptive Optimization (AO)
    • Dynamic Optimization (DO)
    • Local Replication (Snapshot)
    • Remote Replication (RCIP) – requires 2 instances of the simulator (and more resources)
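Once a simulator instance is up, a few CLI sanity checks confirm the deployment (a sketch; the output will of course vary per lab):

```shell
# Confirm the simulated system identity and InForm OS version
showsys
showversion -b

# Verify both simulated nodes and all virtual cages are visible
shownode
showcage
```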


Read More

7-Mode Data ONTAP Simulator 8.2.3 – building lab

This article builds on the lab idea presented here: Data ONTAP Simulator 7.3.6. The main difference is that now we try to reflect (at least conceptually) a production environment, where we have an Active Directory domain and data served to hosts through the CIFS, NFS, and iSCSI protocols.

– 2x 7-Mode 8.2.3
– Linux CentOS 7.0 (at least one)
– Windows Server 2008 R2
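Serving data over the three protocols mentioned above can be sketched in 7-Mode as follows (volume, share, host, and initiator names are illustrative only):

```shell
# NFS: export a volume read-write to one Linux host
exportfs -io rw=centos1 /vol/vol_nfs

# CIFS: publish a share from the same filer
cifs shares -add data1 /vol/vol_cifs

# iSCSI: start the service and create an initiator group
# for a Windows host
iscsi start
igroup create -i -t windows ig_win2008 iqn.1991-05.com.microsoft:host1
```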

Prerequisites for NetApp Simulator Lab

NetApp 7-Mode Data ONTAP 8.2.3 Simulator

  • Dual core 64-bit Intel® architecture laptop or desktop
  • Simulate ONTAP 8.2.x or lower: 2 GB RAM for one instance, 3 GB for two instances.
  • 40 GB free disk space per instance of simulator
  • Hosts running a 32-bit OS require VT support for Intel® based systems or AMD-V (SVM) for AMD® based systems. The feature must also be enabled in the BIOS if not enabled by default.


Read More

7-Mode Snapshot directory and nosnapdir

It happens, especially in the case of a failover/giveback, that CIFS shares dedicated to the NetApp snapshot directory are no longer accessible when the underlying volume has the nosnapdir option turned on.

The nosnapdir option is dedicated to those who appreciate strong security on their snapshots (due to company security/audit policies). After all, this option restricts access to the NetApp snapshot directory not only for clients, but also for the filer itself. Our snapshots are really safe, but if we would like to retrieve some files whose names we do not know, we cannot use snap restore from the CLI.
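In such a case, one workable approach is to temporarily lift the restriction, browse the snapshot directory, and restore it afterwards. A sketch (the volume name is an example):

```shell
# Check the current settings on the volume
vol options vol_data

# Temporarily expose the .snapshot directory again
vol options vol_data nosnapdir off

# ... browse /vol/vol_data/.snapshot to find the file names ...

# Restore the restriction afterwards
vol options vol_data nosnapdir on
```

Whether this is acceptable even briefly depends on the same security/audit policies that motivated the option in the first place.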

Read More

[MetroCluster] How to troubleshoot interconnect link down and FC-VI

Recently I faced an interconnect failure in one of our customer environments. Everything ran smoothly until Monday morning, when I received a notification event about the interconnect link being down.

Cluster Interconnect link is DOWN

In the environment I have the pleasure to work with, we have a fabric-attached MetroCluster configuration. Between the filers there are double HA interconnect cables (attached to the FC switches), and heartbeat communication is served via the MetroCluster FC-VI card. If this single two-ported card goes down, we are looking at a minor disaster, because without heartbeat messages between the nodes we have a guaranteed takeover.
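A first triage from the controller side can be sketched with standard 7-Mode commands:

```shell
# Interconnect and takeover state as seen by this controller
cf status

# Continuously watch HA interconnect status changes
cf monitor

# Identify the FC-VI interconnect adapter, its slot and firmware
sysconfig -v
```

If cf status reports the interconnect as down while sysconfig still sees the FC-VI adapter, the fabric path (ISL, switch port, cabling) is the next place to look.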

Read More

7-Mode Data ONTAP upgrade single node

This topic covers a disruptive Data ONTAP 7-Mode upgrade on a single-node system.

Check another post for more information: What should we do before upgrade?

Data ONTAP 7-Mode disruptive upgrading on standalone system – upgrade step by step

Just before you start with the plan below, ensure that everything on the filer is online (disks, volumes, aggregates, interfaces) and that no maintenance jobs are running, such as RAID scrubbing, reconstruction, or disk wiping.
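Those pre-checks can be sketched with the following 7-Mode commands:

```shell
# Everything should report online before a disruptive upgrade
aggr status
vol status
disk show -v
ifconfig -a

# Make sure no scrubs or reconstructions are in progress
aggr scrub status
aggr status -r
```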

Read More