Text I type is green, computer replies are purple.
I had a weird issue this morning. A teacher brought in her school Mac. She was unable to authorize the computer to allow screen sharing in Zoom.
Everything in our MDM was set properly. Standard users were allowed to make their own decisions for screen capture1. I clicked the lock icon to authenticate there, but it wouldn't accept my credentials for the admin user.
A bit about our workflow.
Device in Apple School Manager and assigned to our MDM (Mosyle)
Computer turns on and goes through Automated Device Enrollment (ADE) and hands off to Mosyle
Authenticate to Mosyle via Google
Mosyle installs profiles, Rosetta (on Apple Silicon Macs), skips all of Apple’s setup screens
Creates local admin user
Sets up Google Authentication for user (when user first logs in it creates the user as a Standard User)
So, in theory, that admin user is set up by Mosyle on first boot.
I check in Users and I see the local admin user there.
The teacher had to leave at this point. I rebooted and decided to run resetpassword in Recovery Mode. It's changed since I last used it: it asks me to authenticate as a user whose password I know. I choose the only account there, the admin account. It recognizes the password, but then the only password I can change is the standard user's, not the admin user's.
Now, I don't have my users' passwords, because keeping those would be a really, really bad idea. So I couldn't log into the account.
When logging into the admin account, I was getting this error: Error: Credential verification failed because account is disabled.
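For anyone hitting that same error: the disabled flag shows up in the account's AuthenticationAuthority attribute, which you can read with dscl. A small sketch, using a hypothetical sample value since I obviously can't run dscl against that Mac here:

```shell
# On the Mac itself you'd run:
#   dscl . -read /Users/admin AuthenticationAuthority
# A disabled account carries a ";DisabledUser;" tag in that attribute.
# Hypothetical sample of what a disabled account returns:
AUTH='AuthenticationAuthority: ;DisabledUser; ;ShadowHash;HASHLIST:<SALTED-SHA512-PBKDF2>'
if printf '%s' "$AUTH" | grep -q 'DisabledUser'; then
  STATUS="disabled"
else
  STATUS="enabled"
fi
echo "$STATUS"
```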
I tried running pwpolicy disableuser -u admin via Mosyle and got: Getting account policies for user <admin>
Well, that’s not helpful.
Let’s try pwpolicy enableuser -u admin.
Enabling account for user <admin>
Not a very useful reply, but okay.
Still cannot log in. Whomp whomp.
Let’s try our old friend dscl
```
dscl . passwd /Users/admin password
Permission denied. Please enter user's old password:
<dscl_cmd> DS Error: -14090 (eDSAuthFailed)
passwd: DS error: eDSAuthFailed
```
This was fun! Okay, let's log into the computer. Since I couldn't get into the user account or the admin account, I needed to create a new account. I tried using Mosyle to install a local account, but it didn't seem to work. Instead, I created a new Google Auth profile, removed the user from the old one, and added her to the new one.
Now I could log in with my Google account, so I did. I also went into MunkiAdmin, found that computer, and added SAP's great app Privileges.
I opened up Managed Software Update on the computer, installed Privileges, and elevated my privileges to an admin account.
It was then that I saw a couple of things. /Users/admin didn’t exist. I set the admin account to a standard user and tried to change its password. I was told I cannot. I tried to delete the account and was told I could not.
So instead I renamed the account to “adminFUUUUUUUUUU”. I created a new admin account. Lowered my privileges, went into Zoom and I was able to activate Screen Recording. I tested authenticating as the new admin account and it worked. I removed Privileges and revoked access to the app from Munki.
I tested logging in as admin and all was good.
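For reference, the "create a new admin account" step can also be done from the command line with Apple's sysadminctl. A sketch with a hypothetical account name, built as a string here rather than actually executed:

```shell
# Hypothetical short name for the replacement admin account
NEW_ADMIN="newadmin"
# -password - prompts interactively instead of putting the password in argv
CMD="sysadminctl -addUser $NEW_ADMIN -fullName 'New Admin' -password - -admin"
# Afterwards, confirm the account made it into the admin group with:
#   dscl . -read /Groups/admin GroupMembership
echo "$CMD"
```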
which is called Screen Recording in System Preferences, why are you making everything harder than it needs to be, Apple? [↩]
I wasn’t originally planning to write a blog post about this. I am not on the bleeding edge and others have done it, but I hit some roadblocks along the way and I couldn’t find good answers. In addition, Orlando asked me in #toronto on MacAdmins Slack if I was going to, and how can I say no?
Why I did it
During the pandemic I found that it was a bit painful to get a new Mac up and ready to go out of the box for our teachers at home. While I’m hoping that we will be spending all of the 2021-2022 school year in our actual school, teachers do bring their MacBooks home and Munki will be running in the background and keeping their software up to date off site.
In addition to that, our server was in need of an update. It was acting up in strange ways and couldn’t run the latest version of macOS. If I could put the server into the cloud, I wouldn’t have to worry about managing it. No more certificate updates, no more MAMP, no more copy of MunkiAdmin not working properly.
I could spend $10USD/month for 1TB of data from Wasabi for my 37GB Munki repo. I would get 1TB of data transfer/month and I think that would be more than enough. If I needed to grow it was relatively cheap, and the cost was predictable, unlike Amazon Web Services. If I were to buy a new server, a blinged out Mac Mini with 4 years of AppleCare+ for schools would cost $1800CAD.
Prepping
First thing I did was slim down our Munki repo. I used the repoclean CLI tool that is part of the standard Munki install. I didn't document this and it was quite some time ago, so I'm not 100% sure what documentation I read on it; my Google searching right now isn't turning up much from the wiki. I found it to be pretty straightforward and it worked exactly as expected. I was able to get the repo down to its current 37GB.
Since I couldn't find the documents I used as a resource, here's the --help output.
```
Usage: repoclean [options] [/path/to/repo_root]

Options:
  -h, --help            show this help message and exit
  -V, --version         Print the version of the munki tools and exit.
  -k KEEP, --keep=KEEP  Keep this many versions of a specific variation.
                        Defaults to 2.
  --show-all            Show all items even if none will be deleted.
  --delete-items-in-no-manifests
                        Also delete items that are not referenced in any
                        manifests. Not yet implemented.
  --repo_url=REPO_URL, --repo-url=REPO_URL
                        Optional repo URL. If specified, overrides any
                        repo_url specified via --configure.
  --plugin=PLUGIN       Optional plugin to connect to repo. If specified,
                        overrides any plugin specified via --configure.
  -a, --auto            Do not prompt for confirmation before deleting repo
                        items. Use with caution.
```
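To give a feel for what that -k/--keep option does, here's a toy sketch of "keep the newest K versions" pruning. Repoclean itself presumably uses Munki's own version-comparison logic; this stand-in just sorts dotted version strings:

```python
def versions_to_delete(versions, keep=2):
    """Return the versions that would be pruned, keeping the newest `keep`."""
    ordered = sorted(
        versions,
        key=lambda v: tuple(int(p) for p in v.split('.')),
        reverse=True,
    )
    return ordered[keep:]

# With the default keep=2, only the two newest versions survive.
print(versions_to_delete(['1.0.0', '1.2.0', '1.1.0', '2.0.0']))
# -> ['1.1.0', '1.0.0']
```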
In previous years I had two campuses to manage, and as such I had two Munki servers; we'll call them Roy and Moss, because they were called Roy and Moss. Moss was the primary and Roy was a read-only copy. The two were synced using Resilio Sync. When our North Campus closed, we made Roy the only one. Since I had Resilio Sync in action on Roy, I set it up so I had a copy of the repo on my local computer. It made using MunkiAdmin much faster: before, I was connecting via SMB and it took forever to open the repo in MunkiAdmin; now it is nice and fast. Once I press save, it automatically syncs to both Roy and Audrie's1 MacBook.
Now I had a local copy of Munki. Running AutoPKG updates was much easier and quicker, and pushing it to an S3 bucket would be easy peasy (in theory).
Set up a bucket and connect
I set up a demo account with Wasabi, created a bucket, and created a read-and-write key pair for the user. I used the Root Account Key and downloaded the key pair to my computer. I launched Cyberduck and prepared to use it for something other than SFTP for the first time.
Keep your Access Key and Secret Key somewhere safe and private.
Setting up Wasabi in Cyberduck
Once in Cyberduck, I was able to test: I could create files in the bucket and delete files in the bucket.
Set up the AWS CLI
S3 is a standard, made by Amazon Web Services I assume, but I could be wrong. Amazon, however, has the default command-line interface for S3 buckets as part of the AWS CLI2.
Installing the CLI was pretty easy and straightforward; however, rather than using the code below, I'd suggest you visit Amazon's documentation and get the information directly from them.
Obviously change the PATH/TO to point to your munki_repo, change the BUCKETHERE to your bucket name, and change REGIONHERE to the region you’re stored in.
One thing I would change if I were to redo this from scratch is to get rid of /munki_repo in the S3 path. I don’t need a subfolder in my Bucket. That makes no sense. I would have it look like this instead.
But we’re going to move forward with this as it is.
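The sync snippet itself didn't survive in this copy of the post, so here's a hedged reconstruction: an aws s3 sync pointed at Wasabi's endpoint, built as a string with the same placeholders the post uses. The endpoint hostname format is an assumption; verify it against Wasabi's documentation for your region.

```shell
REPO="/PATH/TO/munki_repo"                       # your local Munki repo
BUCKET="BUCKETHERE"                              # your bucket name
ENDPOINT="https://s3.REGIONHERE.wasabisys.com"   # Wasabi regional endpoint
# As the post does it (repo lands in a munki_repo subfolder):
CMD="aws s3 sync $REPO s3://$BUCKET/munki_repo --endpoint-url=$ENDPOINT"
# The flatter layout suggested above would drop the subfolder:
CMD_FLAT="aws s3 sync $REPO s3://$BUCKET --endpoint-url=$ENDPOINT"
echo "$CMD"
```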
This part took a LONG time. For those in Canada, you know our internet infrastructure sucks. I’m on DSL with 25 down and 10 up.
EDIT: Set up a read-only key
Yikes, I forgot to include this part. In Wasabi I set up a read-only key pair. It's basically the exact same as this, with a few minor navigation changes for Wasabi.
Going forward, use this key pair; you only want the read/write key pair for syncing. Munki clients should be using the read-only pair.
To test this pair, I set up a Google Cloud Console session, installed the Linux AWS CLI tool, and ran the configure command. Once it was set up with this pair, I ran this.
This copied the site_default file to the current directory, so I was able to check that it could get the file.
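The exact command didn't make it into this copy, but reconstructed (hypothetically) from the description, it would be an s3 cp of the site_default manifest, using the same placeholders as before:

```shell
# Run after `aws configure` with the read-only key pair.
# Endpoint hostname format is an assumption; check Wasabi's docs.
CMD="aws s3 cp s3://BUCKETHERE/munki_repo/manifests/site_default . --endpoint-url=https://s3.REGIONHERE.wasabisys.com"
echo "$CMD"
```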
I use Cloud Shell already for gam, and as such, it was easy to set up there.
Client setup
I have a stack of computers that need to be wiped for new staff. So I grabbed one, used AirDrop to send over Install macOS Big Sur. It was the 11.5.2 installer that I pulled by using Armin Briegel‘s Download Full Installer.
I put that app into the /Applications folder on the target Mac, right-clicked on the app, and chose Open. Sometimes it opened; sometimes it bounced in the Dock for a while before stopping, at which point I right-clicked on the app again and chose Open. Then it would open, and I could quit it and run the terminal command.
Obviously, replace "P@55w0rd" with your admin user's password, and replace adminuser with your admin user's username.
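The terminal command itself didn't survive extraction here, so this is a hedged reconstruction of an erase-and-install via startosinstall; the --user/--stdinpass flags are what consume the admin credentials on Apple Silicon. Built as a string here rather than executed, and worth double-checking against startosinstall --usage for your macOS version:

```shell
APP="/Applications/Install macOS Big Sur.app"
ADMIN="adminuser"    # your admin user's short name
CMD="$APP/Contents/Resources/startosinstall --eraseinstall --agreetolicense --forcequitapps --user $ADMIN --stdinpass"
# To actually run it, pipe the password in, e.g.:
#   printf '%s' 'P@55w0rd' | sudo "$APP/Contents/Resources/startosinstall" ...
echo "$CMD"
```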
20-30 minutes later, I had a fresh install of macOS on the client computer. I connected to Wi-Fi, and Automated Device Enrollment kicked in. At this point the only thing I needed to do was set the time zone and I was on the desktop.
Then I ran Munki on that computer, AND HOLY SHIT IT WORKED! Wait, why is it working? That shouldn’t work. I’m really confused. Oh well, I have a lineup at my office and I will let that run, then get back to it.
Let’s look at the logs. Oh, it’s pulling from the old server. How is it pulling from the old server? It shouldn’t be pulling from the old server. Oh wait, I never told it to pull from the new server. I missed step 3 in Wade’s instructions. I’m not sure why it’s a separate step, but it is.
I posted to #munki asking for help in a very detailed way. An anonymous person asked for further information from the logs and confirmed that the middleware was indeed being activated. I didn't see any keys in the headers from what Munki was pulling, but as I was looking through the Python, I saw this line.
```
S3_ENDPOINT = pref('S3Endpoint') or 's3.amazonaws.com'
```
I recognized endpoints as a thing! Because I was using Wasabi and not AWS, I needed to specify the endpoint, just as I had with the CLI above.
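That fallback is easy to see in a toy mimic (the real middleware reads the ManagedInstalls preference domain via CoreFoundation; this stand-in just uses a dict):

```python
prefs = {}  # stand-in for the ManagedInstalls preference domain

def pref(name):
    return prefs.get(name)

# The quoted line: with no S3Endpoint pref set, requests go to AWS...
endpoint = pref('S3Endpoint') or 's3.amazonaws.com'
print(endpoint)  # -> s3.amazonaws.com

# ...so pointing clients at Wasabi means setting the pref explicitly.
prefs['S3Endpoint'] = 's3.wasabisys.com'
endpoint = pref('S3Endpoint') or 's3.amazonaws.com'
print(endpoint)  # -> s3.wasabisys.com
```

On a real client that means something like defaults write /Library/Preferences/ManagedInstalls S3Endpoint s3.wasabisys.com, with the hostname adjusted for your Wasabi region (an assumption on my part; check Wasabi's endpoint list).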
I ran AutoPKG to get some updated packages for it to pull down, synced it to Wasabi, and reran Munki, and it did it!
Done?
Not quite. Let’s test it again.
Get another computer, do it all, why isn’t it working? Oh, I forgot to set the S3Endpoint again. Stupid Adam.
Okay, this time I’m not fucking up. Adding to Mosyle’s fresh Munki install script a line to install the middleware.
```
# install middleware for Wasabi bucket
curl https://raw.githubusercontent.com/waderobson/s3-auth/master/middleware_s3.py -o /usr/local/munki/middleware_s3.py
```
Then I created a custom command in Mosyle to set up ManagedInstalls.plist.
I already had one, which does it for the on-prem server. So I duplicated that and changed the settings. I put in exceptions for the old script so it doesn’t go to all my new computers, and told this new one to go to computers enrolled to my staging user.
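The custom command's body isn't shown in this copy, so here's a hypothetical sketch of what it would contain, built as strings rather than executed. The AccessKey/SecretKey/Region-style key names are how I recall the s3-auth middleware's README documenting things, and S3Endpoint is the undocumented pref mentioned below; verify all of them against waderobson/s3-auth before using, and substitute your own repo URL and read-only keys.

```shell
PREFS="/Library/Preferences/ManagedInstalls"
CMD1="defaults write $PREFS SoftwareRepoURL YOUR-REPO-URL-HERE"
CMD2="defaults write $PREFS AccessKey READ-ONLY-ACCESS-KEY"
CMD3="defaults write $PREFS SecretKey READ-ONLY-SECRET-KEY"
CMD4="defaults write $PREFS S3Endpoint s3.REGIONHERE.wasabisys.com"
printf '%s\n' "$CMD1" "$CMD2" "$CMD3" "$CMD4"
```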
Test it again! Works like a charm!
Let's test it on my computer. I removed myself from the list of computers that the on-prem ManagedInstalls custom command is sent to, and added myself to the computers that the Wasabi ManagedInstalls custom command is sent to.
It didn’t work. WHAT THE????? I HATE COMPUTERS!
Turns out, I had the middleware set to install during the initial deployment of Munki, and my computer had that initial deployment months ago, before I was using the middleware. So obviously it didn't work: I didn't have the middleware installed.
I created a new custom command to just install the middleware and sent that to my entire fleet. Remember before when I had middleware installed and first ran it and it pulled from the old server perfectly? That’s going to happen for my entire fleet until I update the SoftwareRepoURL. So it’s there for the future, but also working with the old system.
Now it worked, and I moved Audrie over to the S3 bucket without telling them. On Monday I’m going to confirm that it is installing correctly, and then we can proceed with moving the rest of the fleet over.
Success!
Things to do
At the end of an AutoPKG run, make it auto-sync to the S3 bucket.
After save is pressed in MunkiAdmin, make it auto-sync to the S3 bucket.
Move the remaining fleet over to the new S3 setup.
Pull the physical server out of the server room, pull the drive, and recycle it.
Thanks
Wade Robson who wrote the middleware
Anthony who helped me proof a long-ass post to #munki to get some help
The anonymous person in #munki who none of us knows who he really is *wink wink* who had me pulling out specific lines of code from Wade’s middleware where I found the undocumented request for a pref called S3Endpoint
macfaq and DJH and Treger on Slack for having similar issues and, while not able to help, being able to respond to my messages and give me encouragement
Audrie is the other half of the IT department. [↩]
A few days ago, Anthony Reimer posted to his blog about Recognition, Retirement, and Remembrance. I suggest you give it a read. I don’t have much to say about Retirement and Remembrance, but I do feel I have a lot to say about recognition. While we don’t have any formal place to acknowledge recognition, I feel I can do so here.
First off, I should thank:
Anthony Reimer
I've gotten to know Anthony over the past few years. My last trip to MacAdmins we got to spend some time together in person and chat about everything and anything. He's very knowledgeable and extremely supportive of the community. While he's not in Toronto, he does hang out in #toronto on Slack, which has led to us getting to know one another better. Since the pandemic, he's been coming to #toronto's virtual meetups, helping to bring a cross-Canada presence to the group. He's also one of the few EDU folks and, as such, faces a lot of the things I do.
I guess if we’re talking about one person in #toronto, we should talk about more:
Nathan, David, Jim, and Brad
Brian, François, Armin, “Gerk,” Meg, Colin, Ron, Nick, Dave, Ben, Alex, Sean, Codey, Dmitriy, Joseph, Ross, MatX, and Jason (and more who I’ve probably missed)
I’m trying to be careful here and respect people’s privacy. So I’m just putting down first names, or for someone who once requested in another blog post that I just use his screen name, I’ve used his screen name. The first group of people are on the MacBrainedYYZ committee with me. Together we plan meetups and these people are kind, funny, and wonderful to deal with. Together we have a fantastic group and they help keep #toronto a lively place with a lot of energy. When I first joined #toronto, it was dead. There was no one there.
They've helped open the doors to a lot of the people in the second row. I'd especially like to acknowledge Brian for always knowing everything I could ever want to know about Google Workspace, François for always being around for troubleshooting and reminding me to file bug reports with Apple, and Armin and Gerk for their incredible scripting knowledge and for always being happy to help a novice like me.
I did a brief co-op during my high school days at Apple Canada. I worked with quite a few people there, most of whom I bet are not at Apple anymore. This was 1998, after all. But I'm going to thank:
Tony
Greg
I've gotten to work with some incredible people under me at my current place of employment, in chronological order:
Daniel
Shaun
Audrie
They’ve been fantastic. I’m really lucky to have worked with these three, and they’ve helped me out so much. They all have a very different set of skills and are all awesome.
When I worked at an Apple Authorized Reseller, I got to work with some great people, amongst those I want to point out:
Graeme
Vince
Graeme, Vince and I got really close and have forged a deep friendship. We all now work in EDU and are there to always help one another out.
Robert is always there to provide advice and is a fantastic sounding board. I recall my first time speaking at Penn State, he was in the room as I was preparing, and was able to offer his sage wisdom.
Adam just seems to know everything about everything. He's the top contributor in #mosyle and one of the admins for the whole MacAdmins Slack. Rich is just an endless resource of both hard and soft skills for MacAdmins; his documentation sessions at Penn State are both hilarious and informative. He's also always open to sharing.
Former boss:
Matthew
A great guy, but also, I don’t think I’d have a career without him and his guidance.
All of you get, what I’m going to call this award, an Ankie. This is a community worth recognizing, so thank you all.
Oh, this is a story! It’s about Sarah Jane, not Elisabeth.
Apparently every companion came to her service.
Haha! Random tricker plot as a side note.
This is nicer than I expected.
Ace!
Clive’s grown up!
Aww, Luke has a picture of him with his mum framed.
They keep saying "him" about the Doctor; she's a she.
The Archive of Islos
Well, the animation is shit, and it looks more like Star Wars than Doctor Who.
Why does that space Dalek have a hullahoop around it?
Do you wish to apply for a membership card?
Robot to Dalek
So far only Daleks and Robots have spoken, and if this series continues like this, I will hate it.
The way Daleks move on the spot when chatting is reminding me of a little kid who needs to pee.
There’s a lot of long silences for a 14 minute episode.
The Sentinel of the Fifth Galaxy
The green blob is after the Daleks. Now it feels like original Star Trek. That would be better.
Now I want to write comedy shows where we just replace the audio from animated Star Trek.
Oh shit, Green Blob is massacring Skaro.
It must be Fun to voice the DaLEKKKKS! So much UPTAAAALK!
It’s Dalek Stonehenge. Perhaps the most famous of the henges.
That’s some heavy Dalek on Dalek action.
Green Blob is still after them.
Planet of the Mechanoids
More robots. So many robots in this show talking to one another.
Really? These Daleks can’t wait five minutes?
The Deadly Ally
Green blobs are coming for you.
Why do Daleks go extinct so often?
Do you think the Mechanoids will kill the two remaining Daleks? Find out next time on, Daleks!
Day of Reckoning
So the Doctor is not there because she’s not involved in every aspect of Dalek history. However, she is immortal, has lived more lives than the 13 we know. So on a large enough timescale, the Doctor will be there for every instance of Dalek history.
If the Mechanoids really wanted to make the Daleks extinct, they should have just killed them while they were on their planet.
See, now there’s all these other Daleks, that was just stupid.
Shoot the emperor!
The animation is garbage.
So, that was a thing. I don’t recommend watching it.
Doc’s been getting a bit squirmy, running off to do personal stuff and leaving the companions behind. If they’re your best friends, maybe tell them about yourself. That’s what friends do.