Featured

## How I Stumbled Into the #vCommunity and How You Can Too

Normally I’m a technical guy who rarely blogs about things that are not technical in nature. But this month I signed up for a contest called BLOGTOBER: Tech Edition. (Side note: if you have even a passing interest in blogging, I encourage you to go sign up here.) I figured I would start off Blogtober with a thought piece about how I got involved in such things as a blogging challenge.

### How I Got Started

I started out on a help desk taking phone calls. But early on I recognized something that holds true for technology jobs: the person who shows interest and a willingness to learn moves up. So I poured myself into my studies. I tirelessly searched the internet for study material for whatever certification I happened to be studying at the time. This is when I found and fell in love with vBrownbag, which was my first introduction to a technology community.

After about a year at my help desk job, I got promoted and handed one of my favorite projects of all time. The company I worked for did not use virtualization in their environment, but my boss at the time encouraged me to change that. Having no experience with VMware, I was doubtful, but not deterred. Over the next year I converted the majority of our infrastructure from physical to virtual using only the Mastering VMware book and the vBrownbag YouTube videos. It was an amazing experience.

I am blessed to say that as time passed, I have moved up and changed jobs a few times. I am now a VMware administrator for an international children’s hospital system, a role I truly love. Now that we’ve got the background out of the way, this is where the real story begins.

### My Journey to the vCommunity

In June of this year I had the opportunity to attend HPE Discover 2018. Excited to go, I naturally wanted to share. So I got on Twitter and sent out the following tweet:

At the time I rarely used Twitter, so this next part might seem strange to all you Twitter vets out there. I was surprised to find that people I didn’t know commented that they were coming too. One of them was Tim Davis (@vtimd) who I happened to follow on Twitter. So I thought, “Ok, cool. It might be random, but I’m going to say hey to this guy if I see him at the conference.”

Then, the next day I saw this tweet:

At this point, I was really intrigued. Tim Smith (@tsmith_co) sent out a tweet looking to meet up with some people socially at the event. Again this might be the norm for those versed in the vCommunity and Twitter, but for me at the time it was revolutionary. Feeling like I was on a roll from the earlier tweet, I hit LIKE. I thought, “Why not? If I see this guy, I guess I’ll say hey!” For the next week, I searched Twitter looking for events at the conference and people to meet up with. (I actually never got to meet up with Tim Smith, but his idea inspired me nonetheless.)

Fast forward to the conference. By then I was regularly using Twitter and really upping the number of people I followed. I also naturally started to engage more with those people. At the conference, I met Tim Davis (@vtimd) who was working at the VMware booth. Admittedly, it was a bit out of my comfort zone, but I went up to him and introduced myself. In the conversation we started talking about careers and he said, “Getting on Twitter was the best career move I have ever made.” I remember thinking, “What?” It didn’t really register with me then that Twitter could be such a great catalyst to expanding your network both socially and professionally.

Being the vBrownbag fan that I am, I eventually made my way over to the vBrownbag stage. Again, a bit out of my comfort zone, but I started introducing myself to some of the presenters that day. I met Philip Sellers (@pbsellers), Luigi Danakos (@NerdBlurt), Matt Crape (@MattThatITGuy), and the man himself, Alastair Cooke (@DemitasseNZ). I had a chance to ask questions and hear how these guys got involved in what I would later come to know as the vCommunity. As fate would have it, Matt happened to be doing a talk called “Growing Your Career Through the vCommunity”. You can check it out here. After his great talk, he personally encouraged me to get involved. It was as simple as that. Get involved.

When I got home to Tampa, I committed myself to follow up on Matt’s words. I had been attending our local VMUG, so at the next meeting I voiced a desire to get involved. Turns out there was an opening for a co-leader:

### Now You Do It

So there you have it, my journey into the vCommunity. But it’s not enough to tell you about my experience without encouraging YOU to “Get Involved.” As it turns out, it’s not that hard. The community is growing and vibrant. There is no barrier to entry. Ken Nalbone (@kennalbone), in his blog post titled Live Outside Your Comfort Zone. It Will Be Worth It, gives the most practical advice I have seen about how to get involved in the vCommunity. He says, “I decided to march straight up to anyone I recognized that I had not spoken with before and introduce myself.” In a nutshell, it’s that easy.

Finally, one thing I haven’t covered yet is the tangible benefits of getting involved in the vCommunity. Through programs such as VMUG Advantage, vExpert and other vendor-sponsored activities, you can get access to great software and more swag than you know what to do with.

For the tl;dr folks, behold!

##### Jamey’s Guide to Getting Involved in the vCommunity
1. Follow all the people on this list.
2. Genuinely engage with them. It does not matter that you have not met them in person.
3. Don’t be shy, be bold (even if it takes you out of your comfort zone a bit :)). Everyone is welcoming and there are no dumb questions.
4. Watch Matt’s vBrownbag talk at least 5 times.
5. Read these posts about others’ experiences: Ken, Tony and another Tony.
6. Go to VMUGs/VTUGs/UserCons, basically any technology event you can.
7. Give back, however that looks for you.
8. Most importantly, don’t think you don’t have anything to contribute, because you do.
9. “Get Involved”

## Git 100

Git has become a must-know tool these days, not only for programmers but for any would-be dev/sys/sec ops types. With the rise of configuration management and infrastructure as code, Git is no longer optional for someone in my line of work.

##### So What Is Git?

Git is a version control system. At its core, Git keeps track of changes. It creates versions of files and allows us to compare versions and see what has changed between them. It also allows for a systematic way to review change history and revert to older versions. It lets us clone other people’s files (repositories), modify them on our own, and later commit the changes we made. These are the big features of Git.
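As a quick sketch of those features in action (the repository location and file name here are made up for the example), a minimal session might look like this:

```shell
# Create a throwaway repository, commit two versions of a file,
# compare them, then revert to the first version.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "demo@example.com"   # an identity is required before committing
git config user.name "Demo User"

echo "version 1" > notes.txt
git add notes.txt
git commit -q -m "First version"

echo "version 2" > notes.txt
git commit -q -am "Second version"

git diff HEAD~1 HEAD -- notes.txt    # see exactly what changed between versions
git checkout -q HEAD~1 -- notes.txt  # revert the file to the older version
```

After the last command, notes.txt is back to its first version, while the full history of both versions is still in the repository.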

##### History of Git

The original version control system was SCCS (Source Code Control System), used primarily on the UNIX OS. Next came RCS (Revision Control System), the first cross-platform VCS. The next big VCS was Apache Subversion (SVN). Its uniqueness was that it was not just tracking individual files: Subversion watched the files in a directory and took snapshots of whole directories, with transactions committing the entire set of changes to a directory at one time. Finally, BitKeeper was the most commonly used VCS in the years preceding the creation of Git; its community edition was the main VCS for the Linux kernel. In 2005, Git was created as an open-source project (many say because BitKeeper discontinued the community edition), and it has gone on to become the most successful VCS of all time.

##### Distributed Version Control

Git is a distributed version control system, an alternative to traditional VCSs, which had one central repository for tracking versions of files. Distributed version control tracks change sets as their own entities, which can be applied to multiple repositories. There is no single master repository that all other repositories are behind; there can be multiple repositories with multiple change sets. For example:

Repository 1: A, B

Repository 2: A, B, C

Repository 3: A, B, C, D

Repository 2 and 3 are not “behind” repository 1 the way you may think. Distributed repositories allow us to update each repository individually apart from the other. For example, we could update repository 1 with D from repository 3 without issue. But be aware that, by convention, people often have a master repository which everyone commits to, even if it’s not required by Git.

Benefits of distributed version control include no single point of failure, no need for constant network access, and the ability to work independently and later submit changes for review. Git also embraces the idea of forking a repository. All repositories are considered equal.
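To make the peer-repository idea concrete, here is a small sketch (directory names and commit messages invented for the example) of two repositories exchanging a change set, with neither one being authoritative:

```shell
# repo1 holds change sets A and B; repo2 is a clone that adds C.
# repo1 then pulls C over from its peer.
work=$(mktemp -d)
cd "$work"
git init -q repo1
git -C repo1 config user.email "demo@example.com"
git -C repo1 config user.name "Demo User"
git -C repo1 commit -q --allow-empty -m "A"
git -C repo1 commit -q --allow-empty -m "B"

git clone -q repo1 repo2
git -C repo2 config user.email "demo@example.com"
git -C repo2 config user.name "Demo User"
git -C repo2 commit -q --allow-empty -m "C"

# repo1 fetches the new change set from its peer and merges it in.
git -C repo1 fetch -q "$work/repo2"
git -C repo1 merge -q FETCH_HEAD
```

Neither repository was ever “behind” in any structural sense; repo1 simply chose to take the change set it wanted from repo2.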

##### How to Install

https://gist.github.com/derhuerst/1b15ff4652a867391f03

This link provides instructions on how to install Git on Windows, OSX or Linux. Note: when installing on Windows, make sure to choose “Checkout Windows-style, commit Unix-style line endings” when the installer asks how to deal with line ending conversions.

Initial Git configuration can happen at three levels:

• System level: applies to every user of the computer
• User level (global): applies to the current user
• Project level (local): applies only to the local repository

The syntax for configuration is git config followed by a flag that identifies the level at which you want to set the configuration.

• System: git config --system
• User: git config --global
• Project: git config

Keep in mind that settings at the lowest level take priority. For example, settings at the user level take priority over settings at the system level.

##### Config File Locations

On Windows machines, the config file locations can be found with

git config --list --show-origin

On *nix-based systems it is easier: the system configuration is at /etc/gitconfig, the user-level configuration is at ~/.gitconfig, and the project configuration is at project/.git/config.

##### Example Time

Note how after git config we set the level of configuration using the --global flag, and how the command updates the corresponding configuration file. We could instead edit ~/.gitconfig directly, adding a name value under a [user] section, rather than using git config --global user.name.
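Here is a sketch of that user-level example (the name and email values are placeholders, and HOME is redirected to a temporary directory so the demo does not touch your real ~/.gitconfig):

```shell
# Point HOME at a scratch directory so the demo's ~/.gitconfig is disposable.
export HOME=$(mktemp -d)

# Set user-level (global) configuration values.
git config --global user.name "Jamey Demo"
git config --global user.email "jamey@example.com"

# The command wrote a [user] section into ~/.gitconfig:
cat "$HOME/.gitconfig"

# And git config can read the same values back at that level:
git config --global --get user.name
```

Editing the file by hand and running the command are equivalent; the command is just safer because it handles the section and key syntax for you.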

##### Git Help (Not a Pun)

git help

will list the most common Git commands.

git help <command>

will give you detailed help on that command.

Git 101 coming soon!

## Learning AWS: IAM

Like everyone, I have been learning public cloud lately. I figured I would start with AWS based on its ubiquity. Knowing that the best way to learn something is to teach it, I decided to start the Learning AWS series. Whenever I learn a new technology, I like to really home in on the basics. So this post was born out of that idea.

##### IAM: Identity and Access Management

Above is a screenshot from the IAM console. Note the left column. These are the ideas that make up IAM in AWS.

1. Groups: Groups are a collection of users. AWS does not allow nesting of groups. Users can be members of multiple groups and this is how users get different permission policies. Groups do not have any permissions initially. Policies must first be applied to a group to grant permissions on AWS resources.

2. Users: Users are typically human user accounts but can also be service accounts. When used as service accounts or for REST/Query API access, an access key ID and secret key are used in place of a username/password. For example, you could embed the access key ID and secret key in software on a web server; that web server could then access AWS resources such as S3 via the account. For simple access to the management console, a username and password are sufficient. Each account, whether service or user, has a username, a password, an MFA option and a key ID/secret key option. There is a maximum number of accounts you can create, and Users is not intended to be a directory service. Like Groups, Users also have no permissions by default.

3. Roles: Similar to Groups, but not the same. Roles can be assigned to more than just users; roles can be assigned to EC2 instances, for example. Instead of a service account embedded on a server as before, we apply a role to the entire EC2 instance, and it is able to make S3 transactions without any proxy service account. The server itself has the permissions of the role instead of a user having them. Roles are also used for federated accounts, such as Microsoft Active Directory.

AWS defines roles as follows:

“IAM roles are not associated with a specific user or group. Instead, trusted entities assume roles, such as IAM users, applications, or AWS services such as EC2.” source

Types of Roles:

a. AWS Service will call AWS resources on your behalf as mentioned above with the EC2 instance.

b. Another AWS account provides cross-account access between an identity provider account and an AWS account. It allows entities in other accounts to perform actions in your AWS account.

c. Web identity allows users federated by external web identity or OpenID Connect providers to assume this role to perform actions in your account. Providers such as Google and Facebook are natively supported.

d. SAML 2.0 federation is similar to Web identity. It allows users that are federated with SAML 2.0 to assume this role to perform actions in your account.

Note: I pulled some of the wording above from the tool tips in the IAM console.

4. Policies: Policies are used to grant access. Policies are created apart from Users, Groups or Roles and are then applied to Users, Groups or Roles. Permissions are JSON-based statements, which makes JSON vital for AWS administrators. The IAM JSON statement elements are as follows: Version, Statement, Sid, Effect, Principal, Action, Resource and Condition. For now I will only be covering Version, Statement, Effect, Action and Resource. HERE is a list of all the IAM JSON policy elements, including those I chose not to cover in this post.

Version, Statement, Effect, Action and Resource

I found a pre-defined S3 read-only access policy from AWS and copied it below. I updated the Resource line to be specific to my S3 bucket but made no other modifications. Let’s walk through its elements.

a. Version: This defines the version of the policy language. Think JSON version. The current version is 2012-10-17, so use that if you attempt to write a policy from scratch.

b. Statement: This is the main body of the policy. Statements have the Effect, Action and Resource elements which actually make up the permissions. You can actually have an array of statements in a policy, but my example only has one.

c. Effect: This is the allow or deny option of the rule.

d. Action: This is the “What” of the rule. An action uses Amazon service name (ASN):action syntax. For example, an Action could be s3:List*, which would allow the entity the policy is applied to to list all of the content in my S3 bucket. You can see my example below defines Get* and List*; in effect, this allows us to get information about anything in the S3 bucket named mybucket without being able to modify its contents. The words following the service, such as List, are specific to that service and are predefined: they are the actions AWS says you can perform on the resource (Get, List, Put, etc.). Here are S3’s actions, for example.

e. Resource: I think about resources as instances of a service. Resources follow the Amazon Resource Name (ARN) scheme. In essence, a resource is an instance of an Amazon service, such as S3, referenced by a specific naming scheme. For example, a bucket in S3 might look like this: arn:aws:s3:::mybucket. So it’s not S3 in general, which would be considered a service; instead, the resource is mybucket. ARN is a blog post all on its own. For now I’m going to leave you with this link to investigate further.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "arn:aws:s3:::mybucket"
        }
    ]
}
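Since Statement accepts an array, you can stack multiple statements in one policy. As a sketch (the bucket name is still the hypothetical mybucket), here is a variant that keeps the read-only access and adds an explicit deny on deleting objects:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "arn:aws:s3:::mybucket"
        },
        {
            "Effect": "Deny",
            "Action": "s3:DeleteObject",
            "Resource": "arn:aws:s3:::mybucket/*"
        }
    ]
}

An explicit Deny wins over any Allow, so even if another policy granted delete access, this statement would block it.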

5. Identity providers: This allows federated identity with providers that use SAML or OpenID Connect. Active Directory integration and management is done using Directory Service, a separate AWS service.

6. Account Settings: Here you set password complexity for user accounts and define which regions can request temporary credentials.

##### Wrapping Up
• Users go in Groups.
• Everything else gets assigned a Role.
• Policies define permissions and are applied to Users, Groups and Roles.
• Effects allow you to allow or deny an action.
• Services have actions that can “happen” to them.
• Resources are instances of a service and provide scope for a policy.

## VMUG Advantage: It’s Not a Sales Pitch When You Love the Product

My final post of #Blogtober2018 is about a program I love and a must-have for anyone who works on VMware technology: VMUG Advantage. My transition to IT was almost entirely self-taught, and I depended extensively on my home lab as a way to generate experience. Hunting for demos, NFR licenses, free trials and home lab licensing became an art form for me. To this day I have never found a better source than VMUG Advantage.

##### VMUG Membership

Note the red arrow. EVALExperience gives you access to practically the entire VMware product line. You don’t have to sign up for VMUG Advantage to enjoy the benefits of VMUG itself. You can join the VMUG community for free here. This will get you access to meetings and UserCons, which are invaluable for networking and just plain fun. But the icing on the cake is EVALExperience.

##### Financial Benefits

The financial benefits for test takers cannot be overstated. At the time of this writing, the VCAP exam costs $450. One test will almost pay for Advantage itself. If you do not currently have a VCP or greater, then you will most likely need a training class before you can sit for the exam. These classes can cost thousands of dollars. The 20% discount that EVALExperience gives on training classes can quickly become no small sum.

##### Learning and Experience

But more to the point, I want to emphasize that the main benefit of Advantage is not financial. What was and still is important to me is experience, experience that you might never be able to gain access to on your own. When I started out in IT, I put experience gained through the EVALExperience program on my resume. When asked about it by potential employers, I explained that it was in my home lab. I would challenge the interviewer to put me in front of a console and see what I could do based solely on experience I gained from my home lab. “If I can do it, it really does not matter where I learned it,” I would say. I would never have been able to say that without EVALExperience. I’ll leave you with a list of the benefits below. Save money. Get experience. Join VMUG Advantage.

##### EVALExperience Discounts

• 20% discount on VMware training classes
• 20% discount on VMware certification exams
• 35% discount on VMware certification exam prep workshops (VCP-NV)
• 35% discount on VMware Lab Connect
• $100 discount on VMworld attendance

##### Licensing

Data Center & Cloud Infrastructure
VMware vCenter Server v6.x Standard
VMware vSphere ESXi Enterprise Plus with Operations Management

Networking & Security
VMware NSX Enterprise Edition
VMware vRealize Network Insight

Storage and Availability
VMware vSAN
VMware Site Recovery Manager

Cloud Management
VMware vRealize Operations
VMware vRealize Automation 7.3 Enterprise
VMware vRealize Orchestrator
VMware vCloud Suite Standard

Desktop & Application Virtualization
VMware vRealize Operations for Horizon

Personal Desktop
VMware Fusion Pro 11
VMware Workstation Pro 15

Sources: Licensing Discounts Signup

## Linux LVM: I’m Already Falling Asleep

I have been working on a P2V of some Red Hat nodes and have been doing some Logical Volume Management updates post-conversion. I started thinking about how I wish someone had taught me about LVM when I was first starting out. The abstractness of the idea is a lot to take in, but like most things, once you do it a few times you will develop your own style and ways to remember. We are going to walk through creating an LVM setup with little insights that I hope help you pick up the concepts faster. What could be more fun? 😉

The four main components of LVM are:

1. The physical storage devices themselves
2. The physical volumes
3. The volume group
4. The logical volumes

Image Source

##### The Physical Devices and the Physical Volumes

This idea was the hardest part for me initially. The physical volume is something you create from the physical device. It’s the first logical abstraction. Think of these hard drives as unaltered in any way: just devices seen by the OS. That’s it. You could create a partition, format that partition with a filesystem, and mount it. Instead, with LVM, we partition it in preparation for it to become a physical volume. Technically, you don’t have to partition the disk; you can create a physical volume using the whole disk. But it is best practice to partition the disk, according to the Linux Documentation Project (of which I’m a fan).

I’ll be partitioning three disks and creating physical volumes from them. The disks are /dev/sdb, /dev/sdc and /dev/sdd. The command we will be using for partitioning is fdisk. The commands we will be using for physical volume creation are pvs (physical volume show) and pvcreate (physical volume create).

Partitioning with defaults:

Physical volume creation with defaults:

The fruits of our labor:
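In command form, the steps above look roughly like this. This is a sketch, not a copy-paste recipe: it assumes the /dev/sdb, /dev/sdc and /dev/sdd disks mentioned earlier, must be run as root, and drives fdisk non-interactively with piped keystrokes, so adapt it to your own disks before running anything.

```shell
# Partition each disk with a single primary partition using the defaults,
# then turn each new partition into an LVM physical volume.
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    # Keystrokes: n (new), p (primary), then defaults for number/start/end, w (write).
    # Optionally you can also set the partition type to 8e (Linux LVM) with t.
    printf 'n\np\n\n\n\nw\n' | fdisk "$disk"
    pvcreate "${disk}1"      # e.g. /dev/sdb1 becomes a physical volume
done

pvs    # show the new physical volumes and their sizes
```

pvs at the end is the “fruits of our labor” view: one physical volume per disk, not yet assigned to any volume group.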

Looking back at the picture above, we are now on the light blue line.

##### The Volume Group

The volume group can be thought of as a pool. It is one logical resource. The logical volumes we create on it later are not aware of the physical volumes we created in the previous section, so in that way I always think of the volume group as the middle man in this setup. This is also the cool part of LVM: we can add more physical disks/physical volumes later and expand the volume group. It’s what gives LVM the ability to expand and shrink compared to traditional storage. I’ll be creating one volume group called volume_group. The commands we will be using for volume group creation are vgs (volume group show), vgcreate (volume group create) and vgdisplay to see details of the volume group.

Volume group creation with defaults:

Volume group details. Notice the VG size. It is the sum of our three disks:

The fruits of our labor:
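In command form, this step is roughly the following (a sketch that assumes the physical volumes created earlier and must be run as root):

```shell
# Pool the three physical volumes into one volume group.
vgcreate volume_group /dev/sdb1 /dev/sdc1 /dev/sdd1

vgs                        # summary view: one VG built from three PVs
vgdisplay volume_group     # details; VG Size is the sum of the three disks
```

Adding capacity later is the same pattern: pvcreate a new partition, then vgextend volume_group with it.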

Looking again at the picture above, we are on the darker blue line.

##### The Logical Volume

Now we will be carving the volume group up into logical volumes. These volumes, like traditional disks, will need to be formatted and mounted. The commands we will be using for logical volume creation are lvs (logical volume show) and lvcreate (logical volume create). I’ll be creating two logical volumes and naming them logical_volume_1 and logical_volume_2. I highlighted the volume_group in an effort to show how the command creates logical_volume_1 and logical_volume_2 from the “pool.”

Logical volume creation with defaults:

The fruits of our labor, two logical volumes from one volume group:
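The commands behind this step look roughly like this (the sizes are invented for the example and it must be run as root):

```shell
# Carve two logical volumes out of the volume_group pool.
lvcreate -L 10G -n logical_volume_1 volume_group
lvcreate -L 10G -n logical_volume_2 volume_group

lvs    # show both logical volumes and the volume group they came from
```

Note that the lvcreate commands never mention /dev/sdb, /dev/sdc or /dev/sdd; the logical volumes only know about the pool, which is the whole point of the middle-man volume group.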

Looking back at the picture we are now on the pink line.

##### The filesystem and mount:

Now if you look in /dev you will find a directory named after your volume group, containing a device for each logical volume. For example, mine are /dev/volume_group/logical_volume_1 and /dev/volume_group/logical_volume_2. We have arrived. We now have something we can format, mount and save files to.
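To finish, here is a sketch of formatting and mounting one of the logical volumes (the filesystem type and mount point are my choices for the example, not requirements, and this must be run as root):

```shell
# Format the logical volume with ext4 and mount it.
mkfs.ext4 /dev/volume_group/logical_volume_1
mkdir -p /mnt/logical_volume_1
mount /dev/volume_group/logical_volume_1 /mnt/logical_volume_1

df -h /mnt/logical_volume_1   # confirm the mounted filesystem and its size
```

From here it behaves like any other mounted disk; add an fstab entry if you want it mounted at boot.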

Looking back on the picture we are now on the yellow line.

##### Wrapping Up:

So to summarize: Physical device —-> Physical Volume —-> Volume Group —-> Logical Volume —-> Filesystem Mount

Check out this post for more detail than you ever thought possible on a subject:

Setup Flexible Disk Storage with Logical Volume Management (LVM) in Linux – PART 1

## Don’t Be the Angry IT Guy

I have not always been in IT. My previous roles were in healthcare as a practitioner. Being able to look from both sides of the glass, I often muse about the perception of IT people. Angry is the most common perception, but also elitist, rude, and someone who makes you feel dumb. We non-IT people would talk around the water cooler about how so-and-so had met one of those criteria. We had one guy who was so rude to a colleague that she would break down and cry. Ironically, I now work in IT and can better understand the frustrations. Across a number of roles, I have been in situations where someone might have seen me as an angry IT guy. Now seeing both sides of the fence, I wanted to write a quick post about how we as IT people can bridge the gap.

##### The Angry IT Guy

Stereotypes are often unfair or flat out wrong. But at the same time they can be a window into group perception. Like it or not, the angry IT guy is one of the most common perceptions. It is so common that Saturday Night Live parodied it (Nick Burns). Of course, an IT person will counter that if you had to deal with what they had to deal with, you would be angry too. But I must challenge my IT colleagues by saying that is not good enough. I have found four truths that help me be kind and professional, while still being able to get things done.

##### Truth 1: Computers Can Make Even Smart People Feel Dumb

My first job was at a help desk for an outpatient clinic system. On a daily basis, I worked with doctors and their computers. I would watch and observe the doctors’ interactions with computers and see their frequent frustrations (often directed at me). But I thought hard about how to respond. I could tell that their lack of computer knowledge made them feel dumb. Apart from and unrelated to me, they felt dumb. People are accustomed to feeling confident in what they know, which can lead them to avoid working on things they don’t know. Many of the doctors I worked with found it easier not to learn the details of computing and instead direct frustration at IT staff. “This never works” sounds a lot better than “I don’t know how this works” in the eyes of most people.

The solution was to enable the doctors. While always being mindful of what I was saying and how it might inadvertently make them feel dumb, I would encourage them and guide them through the correct steps. I would often write instructions and tape them to their desks. They were grateful, and I excelled. Now you might argue, “We can’t babysit or help people who don’t want to learn.” But frankly, I focused on how the interaction helped me. I was seen as someone who “bridged the gap.” It helped my career, and I moved out of the help desk role quickly.

##### Truth 2: Computers Can Make Even Dumb People Feel Smart

Be humble and approachable. Hubris is ugly. IT people deal daily with complex technical details that require absolute precision, so when you have successes it can be exhilarating. The problem arises when we use computer knowledge as the gauge for someone’s skill set. Each person is different, contributing their own knowledge from study and life. It can be easy to view someone as dumb if they ask what we think is a simple question. But the problem is that people see our judgement of them. If you happen to watch the video I linked above, Nick Burns eventually tells the woman to “MOVE”. Everyone around could see that, in essence, he was judging her as not smart enough. That’s never going to help you in life. Being aware of this has helped me as I have moved up over the years. No matter how complex a system I might be working on, there is always someone smarter than me.

##### Truth 3: People Don’t Have To Learn the Computer Details If They Don’t Want To

Anger often comes from feelings of unfairness. IT people have complained to me that it’s not fair that they have to teach people even simple things one day, only to have the same person ask again the next day. Frustration quickly builds when people don’t want to learn about computers. But the thing for me is this: they don’t have to. I don’t want to learn the details of how to fix my car. I don’t care. The funny thing is, my mechanic does not care that I don’t learn either. He benefits from it. But you might argue again, “Why do I have to do everything for people who are unwilling to learn?” You don’t. Provide them the tools to learn, and in a patient, understanding manner, explain that you can’t help today and encourage them to follow the steps you previously provided. When your perception changes about what they “should” do, so will your experience with them and, ultimately, their perception of you.

##### Truth 4: IT Is Not The Point

This one can be hard to swallow for many of us, but ultimately IT is a tool for a company to succeed. I have seen many IT people, including myself, be offended when people point this out. Our natural reaction is, “This place would not function without me!” But the truth is, a company would fall apart if anyone didn’t do their job. Information technology, while important, is no more important than any other part of the business. Everyone who works for the company is contributing and should be recognized as such. Once again, your perception of yourself shapes how you interact with non-IT staff. If you see your job as more important than others’, it’s going to show when you interact with them. There’s no way around that.

I hope these truths have been helpful as you think about how you can bridge the gap in your professional life.

## No Theory Here: Adding ESXi Hosts to a Windows Domain

Here we go, round two for #Blogtober2018 – Tech Edition. The tricky thing about writing technical content for a blog is that most likely it has already been covered, and covered in better detail. Today’s post is no different, so I’m going to post some links to great guides that go in depth on how to join an ESXi host to a domain. If you want more detail or really want to know the “why” and “how”, check these out:

https://kb.vmware.com/s/article/2075361

https://www.altaro.com/vmware/how-to-join-esxi-to-active-directory-for-improved-management-and-security/

http://vcloud-lab.com/entries/esxi-installation-and-configuration/join-domain-esxi-to-an-active-directory-ou-powercli

But my goal in today’s post is function over depth. No theory, only practical application. I’m going to provide a script I wrote to join all the hosts in a specific cluster to my domain, focusing on something that quickly gets the job done while avoiding theory where possible.

##### Part One: The Setup

There are two main things I needed to do in AD before adding ESXi hosts. First, I needed to create an AD Security group to hold accounts that will be used to log into ESXi. This is the group that users must be a part of to authenticate to the ESXi host once joined to the domain.

Save the group name; it will be used as an argument for one of the parameters in the script we use.

The second thing I needed to do was get the canonical name of the OU where I wanted the newly created host computer accounts to land. I had previously created the OU, so all I needed now was to get the canonical name and save it:

Get-ADOrganizationalUnit -Filter "Name -eq 'ESXi Hosts OU you want to use'" -Properties canonicalname | Select-Object canonicalname

Same as before, save the canonical name since you will be using it as an argument later.

Finally, ensure the following are true before running the script to avoid any errors later on:

• Ensure the ESXi hosts and domain controllers share an NTP source.
• Each ESXi host must have an A record in the domain.
• Proper firewall ports must be open on the ESXi hosts. If you have a restrictive setup, be sure to check that the appropriate ports are open.
• Write down the canonical name and security group mentioned above.
• Be sure to run the script with both AD and vCenter permissions.
##### Part Two: Function Over Form

I used a function and mandatory parameters to help ensure we don’t forget anything. To break it down, the function:

1. Connects to vCenter
2. Loops through each host in the cluster, joining it to the domain
3. Updates the esxAdminsGroup advanced setting with the AD security group
4. Removes .domain.root from the host name for the Set-ADComputer cmdlet
5. Updates the AD description with the argument you passed to the $descriptionUseQuotes parameter

function Set-JSESXiDomainJoin {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory=$true)]
        [string]$clusterName,
        [Parameter(Mandatory=$true)]
        [string]$domainInCanonicalNameFormat,
        [Parameter(Mandatory=$true)]
        [string]$user,
        [Parameter(Mandatory=$true)]
        [string]$password,
        [Parameter(Mandatory=$true)]
        [string]$descriptionUseQuotes,
        [Parameter(Mandatory=$true)]
        [string]$ADAdminGroup,
        [Parameter(Mandatory=$true)]
        [string]$VIServer
    )

    #Checks for required modules
    #Requires -Modules ActiveDirectory
    #Requires -Version 3
    #Requires -Modules VMware.VimAutomation.Core

    #Connect to vCenter
    Connect-VIServer -Server $VIServer

    #Loop through each host in the cluster
    foreach ($esxiHost in (Get-Cluster $clusterName | Get-VMHost)) {

        #Join the host to the domain
        Get-VMHostAuthentication -VMHost $esxiHost | Set-VMHostAuthentication -Domain $domainInCanonicalNameFormat -User $user -Password $password -JoinDomain -Confirm:$false

        #Update the advanced setting with the AD security group
        Get-AdvancedSetting -Entity $esxiHost -Name Config.HostAgent.plugins.hostsvc.esxAdminsGroup | Set-AdvancedSetting -Value $ADAdminGroup -Confirm:$false

        #Remove the domain name from the host name, leaving only the hostname
        $esxiHostName = $esxiHost.Name.Split(".")[0]

        #Update the AD computer account description
        Set-ADComputer -Identity $esxiHostName -Description $descriptionUseQuotes
    }
}
##### Part Three: Success

It should look something like this when you run it:

Set-JSESXiDomainJoin -ClusterName "ClusterName" -DomainInCanonicalNameFormat "domain.root/ou/ou" -User "jamey" -Password "secret stuff" -DescriptionUseQuotes "ESXi Host - VMware is the best" -ADAdminGroup "VMware people" -VIServer vcenter.domain.root

## Centralize ESXi Core Dumps

In my environment the majority of the hosts boot from SD cards, so persistent log storage is a big deal. I recently ran into a PSOD issue. VMware requested the core dumps, which, of course, I did not have. Thankfully we were able to sort out the issue via the PSOD screenshot I took from the console.

Obviously wanting to avoid this scenario in the future, I set off to find the best way to keep it from happening again. Interestingly, there is no direct way to do this via the GUI or even advanced host settings, but I eventually found an easy way to get it set up. Below is my method. Enjoy.

##### Setup vCenter

Note: We are running VCSA 6.5 in my environment, so the core dump location will be different than if you are using a Windows-based vCenter Server.

First we set up the ESXi Dump Collector on the vCenter server. While logged into the vCenter web interface with an administrator account, click the Administration link in the vCenter home menu. Next, under the Deployment section, click the System Configuration link. From there, choose Services under the System Configuration header on the left side of the screen. Finally, choose “VMware vSphere ESXi Dump Collector.”

From here, click the Manage tab. Then click the pencil icon to change the startup type and set it to run automatically. Next, hit the green play button. I can confirm this will not in any way affect the vCenter server itself, so no worries about affecting production. I kept the defaults for port and size. As noted in the interface, changing either of those two settings requires a vCenter reboot.

That’s it! Now the vCenter server is ready. Next we need to change the settings on the ESXi hosts. I was surprised to find that this must be done at the command line via esxcli; there is no way to do it via advanced settings on the host. The process is straightforward but heavy with administrative overhead: you must SSH to each host and run the following commands. (You can also use Host Profiles, but that is beyond the scope of this particular post.)

##### Setup ESXi Hosts

### Gets the current coredump configuration ###
esxcli system coredump network get

### Sets the server address and port to send kernel dumps to ###
### vmk0 is the management network in my environment ###
esxcli system coredump network set -v vmk0 -i <vCenter IP> -o 6500

### Enables sending of coredumps to vCenter server ###
esxcli system coredump network set -e true

### Shows new core dump configuration ###
esxcli system coredump network get

This time it lists the configured settings, unlike the first time we ran it.

### Sends test coredump to vCenter Server ###
esxcli system coredump network check

OK, we are done with the host setup. Now to confirm the last step on the vCenter server. SSH to the vCenter server and check the following log file:

/var/log/vmware/netdumper/netdumper.log

You should see entries similar to the following:

“Posting back a status check reply to…”    SUCCESS!!!

Quick credit to @lamw for this article: https://blogs.vmware.com/vsphere/2012/12/network-core-dump-collector-check-with-esxcli-5-1.html. It lays out the steps of this setup in great detail.

##### Automate it

Having gotten it to work on one host, I now had to figure out how to get it working on the rest of my hosts with less typing and less time. Enter PowerCLI. Borrowing a lot from this guy’s technique, I used Get-EsxCli and a foreach loop to apply the above settings to each host. At the same time, I did a tail -f on the log file to witness the fruits of my labor. It was a good feeling knowing I had saved myself so much work. So without further ado:
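A minimal sketch of that loop, assuming vmk0 is the management vmknic, the default collector port of 6500, and hypothetical vCenter/cluster names; the V2 argument names here are worth confirming against the CreateArgs() output in your PowerCLI version:

```powershell
#Hypothetical vCenter name and cluster - substitute your own
Connect-VIServer -Server 'vcenter.domain.root'
$dumpCollectorIP = '192.0.2.10'   #replace with your vCenter IP

foreach ($vmHost in (Get-Cluster 'ClusterName' | Get-VMHost)) {
    $esxcli = Get-EsxCli -VMHost $vmHost -V2

    #esxcli system coredump network set -v vmk0 -i <vCenter IP> -o 6500
    $setArgs = $esxcli.system.coredump.network.set.CreateArgs()
    $setArgs.interfacename = 'vmk0'
    $setArgs.serveripv4    = $dumpCollectorIP
    $setArgs.serverport    = 6500
    $esxcli.system.coredump.network.set.Invoke($setArgs)

    #esxcli system coredump network set -e true
    $enableArgs = $esxcli.system.coredump.network.set.CreateArgs()
    $enableArgs.enable = $true
    $esxcli.system.coredump.network.set.Invoke($enableArgs)

    #esxcli system coredump network check - sends a test coredump
    $esxcli.system.coredump.network.check.Invoke()
}
```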

## Set-Citrix

My last couple of posts have been about PowerShell. Its utility has really been obvious in my day-to-day job, so I have been writing about what I’m working on.

If you have ever worked in a XenDesktop environment, you are familiar with machines going unregistered for whatever reason. I wrote this function to produce five possible outcomes that I needed when dealing with unregistered machines and machines in maintenance mode in a XenDesktop environment. The parameters are set up as five switches that kick off Citrix management actions.

The first switch is -MachineUnregistered. This searches a Citrix site to find powered-on, unregistered machines. It will not perform any action on the machines; it only shows you the unregistered machines.

The second switch is -RestartMachineUnregisteredPrompt. This searches a Citrix site to find powered-on, unregistered machines and prompts you to confirm whether they should be restarted.

The third switch, -RestartMachineUnregistered, is the same as the second but does not prompt before rebooting. It just reboots all machines listed as unregistered. Use this one with caution.

The fourth switch is -MachineInMaint. This searches a Citrix site to find machines in maintenance mode.

Finally, the fifth switch is -TurnOffMaintOnMachinePrompt. This searches a Citrix site to find machines in maintenance mode and prompts asking if maintenance mode should be turned off. I hard-coded yes and no with no validation, so if you type anything besides a literal ‘yes’ or ‘no’, you will not receive an error, but the action will not happen.

Update the variable $AdminAddress to match your DDC. After that you should be good to go.

Function Set-Citrix {
<#
.SYNOPSIS
Function that uses parameters as switches to trigger typical Citrix XenDesktop maintenance actions.
.DESCRIPTION
Switch parameters to start predefined XenDesktop actions.
.PARAMETER MachineUnregistered
Gets unregistered machines.
.PARAMETER RestartMachineUnregisteredPrompt
Prompts asking to restart unregistered machines.
.PARAMETER RestartMachineUnregistered
Restarts unregistered machines without a prompt.
.PARAMETER MachineInMaint
Finds machines in maintenance mode.
.PARAMETER TurnOffMaintOnMachinePrompt
Prompts asking to turn off maintenance mode on machines.
.NOTES
Author : Jamey
.LINK
https://jamey.info
.EXAMPLE
Set-Citrix -MachineUnregistered
.EXAMPLE
Set-Citrix -RestartMachineUnregisteredPrompt
.EXAMPLE
Set-Citrix -RestartMachineUnregistered
.EXAMPLE
Set-Citrix -MachineInMaint
.EXAMPLE
Set-Citrix -TurnOffMaintOnMachinePrompt
#>
[CmdletBinding()]
param(
    [Parameter(Mandatory=$false, ParameterSetName="MachineUnregistered")]
    [Alias("MU")]
    [switch]$MachineUnregistered,
    [Parameter(Mandatory=$false, ParameterSetName="RestartMachineUnregisteredPrompt")]
    [Alias("RSMUP")]
    [switch]$RestartMachineUnregisteredPrompt,
    [Parameter(Mandatory=$false, ParameterSetName="RestartMachineUnregistered")]
    [Alias("RSMU")]
    [switch]$RestartMachineUnregistered,
    [Parameter(Mandatory=$false, ParameterSetName="MachineInMaint")]
    [Alias("MIM")]
    [switch]$MachineInMaint,
    [Parameter(Mandatory=$false, ParameterSetName="TurnOffMaintOnMachinePrompt")]
    [Alias("TOMOM")]
    [switch]$TurnOffMaintOnMachinePrompt
)

#Load Citrix modules
Add-PSSnapin Citrix.*

$AdminAddress = 'ddc'

If ($MachineUnregistered) {
    Get-BrokerDesktop -AdminAddress $AdminAddress -MaxRecordCount 5000 | Where-Object {($_.PowerState -eq 'On') -and ($_.RegistrationState -eq 'Unregistered')} | Select-Object MachineName
}
ElseIf ($RestartMachineUnregisteredPrompt) {
    $unregisteredDesktops = Get-BrokerDesktop -AdminAddress $AdminAddress -MaxRecordCount 5000 | Where-Object {($_.PowerState -eq 'On') -and ($_.RegistrationState -eq 'Unregistered')} | Select-Object MachineName
    foreach ($unregisteredDesktop in $unregisteredDesktops) {
        Write-Host $unregisteredDesktop.MachineName
        $answer = Read-Host -Prompt 'Restart unregistered machine? Answer yes or no'
        if ($answer -eq 'yes') {
            New-BrokerHostingPowerAction -MachineName $unregisteredDesktop.MachineName -Action Reset
            Write-Host "Unregistered machine name is: $($unregisteredDesktop.MachineName)"
        }
        elseif ($answer -eq 'no') {
            Write-Host 'Did not restart machine'
        }
    }
}
ElseIf ($RestartMachineUnregistered) {
    $unregisteredDesktops = Get-BrokerDesktop -AdminAddress $AdminAddress -MaxRecordCount 5000 | Where-Object {($_.PowerState -eq 'On') -and ($_.RegistrationState -eq 'Unregistered')} | Select-Object MachineName
    foreach ($unregisteredDesktop in $unregisteredDesktops) {
        New-BrokerHostingPowerAction -MachineName $unregisteredDesktop.MachineName -Action Reset
        Write-Host "Unregistered machine name is: $($unregisteredDesktop.MachineName)"
    }
}
ElseIf ($MachineInMaint) {
    Get-BrokerDesktop -AdminAddress $AdminAddress -MaxRecordCount 5000 | Where-Object {$_.InMaintenanceMode -eq $true} | Select-Object MachineName
}
ElseIf ($TurnOffMaintOnMachinePrompt) {
    foreach ($desktop in (Get-BrokerDesktop -AdminAddress $AdminAddress -Filter {(InMaintenanceMode -eq $true) -and (DesktopKind -eq 'Shared')})) {
        Write-Host $desktop.MachineName
        $answer = Read-Host -Prompt 'Disable maintenance mode for this machine? Answer yes or no'
        if ($answer -eq 'yes') {
            Set-BrokerSharedDesktop -MachineName $desktop.MachineName -InMaintenanceMode $false -AdminAddress $AdminAddress
            New-BrokerHostingPowerAction -MachineName $desktop.MachineName -Action TurnOn
        }
        elseif ($answer -eq 'no') {
            Write-Host 'Did not turn off maintenance mode'
        }
    }
}
Else {
    Return "Choose something"
}
}


## Import AD User to Learn PowerShell

I wrote this script for someone who did not use PowerShell on a daily basis. I knew they wanted to create Active Directory user accounts in bulk from a CSV file, so I wrote it with a few goals in mind:

1. Fix their issue. I wanted to show how useful it is to learn PowerShell by using it to solve a real problem they had. This allowed them to create the users as desired.

2. Show the steps as plainly as possible, skipping any fancy functions or modules. I tried to write it as a plain sequence of steps.

3. Make it easy for them to edit the script if their CSV input changes, despite their having no scripting background.

I think this should be run from the PowerShell ISE instead of the command line. If you open the script on the left in the ISE and then open ADUC on the right, you can observe the results right away. You can “see” what the script does.
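For reference, the script expects CSV column headers along these lines, matching the properties referenced in the code; the sample row is made up, and the exact names should be whatever your HR export actually uses:

```csv
Username,First Name,Last Name,Int,Position,Department,EE #
jdoe,John,Doe,A,Analyst,Information Services,12345
```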

#Imports users from the HR csv and sets it to a variable
$users = Import-Csv -Path 'c:\path\to\csv'

#Loops through each user, creates a variable for each property, and creates the user
foreach ($user in $users) {

    #These match the column names in the csv
    $SamAccountName = $user.Username
    $Name = "$($user."Last Name"), $($user."First Name") $($user.Int)."
    $GivenName = $user."First Name"
    $Surname = $user."Last Name"
    $Initials = $user.Int
    $DisplayName = "$($user."Last Name"), $($user."First Name") $($user.Int)."
    $UserPrincipalName = "$($user.Username)@company.org"
    $Description = "$($user."Position") - $($user."Department")"
    $EmployID = $user."EE #"
    $OU = "ou=ou,dc=company,dc=org"

    #Creates the user
    $NewUser = New-ADUser -SamAccountName $SamAccountName -Name $Name -GivenName $GivenName -Surname $Surname -Initials $Initials -DisplayName $DisplayName -UserPrincipalName $UserPrincipalName -Description $Description -OtherAttributes @{'EmployeeID' = $EmployID} -AccountPassword $newpwd -CannotChangePassword $false -ChangePasswordAtLogon $true -Enabled $true -Path $OU -PassThru

    #Adds the new user to group one
    Add-ADGroupMember -Identity 'group one' -Members $SamAccountName

    #Adds the new user to group two