Browser speed tests

With the release of the new Speedometer 3.0 benchmark, I ran some tests with various browsers on my System76 2018 Galago Pro laptop.

Current specs are (via the inxi terminal command):

System:
Host: Hrothgar Kernel: 6.6.10-76060610-generic x86_64 bits: 64
Desktop: Xfce 4.16.0 Distro: Ubuntu 22.04.4 LTS (Jammy Jellyfish)
Machine:
Type: Laptop System: System76 product: Galago Pro v: galp3
serial: <superuser required>
Mobo: System76 model: Galago Pro v: galp3 serial: <superuser required>
UEFI: American Megatrends v: 1.05.08RSA-1 date: 12/08/2017
Battery:
ID-1: BAT0 charge: 35.2 Wh (100.0%) condition: 35.2/35.3 Wh (99.7%)
CPU:
Info: quad core Intel Core i5-8250U [MT MCP] speed (MHz): avg: 656
min/max: 400/3400
Graphics:
Device-1: Intel UHD Graphics 620 driver: i915 v: kernel
Device-2: Chicony USB2.0 Camera type: USB driver: uvcvideo
Display: x11 server: X.Org v: 1.21.1.4 driver: X: loaded: modesetting
unloaded: fbdev,vesa gpu: i915 resolution: 1: 1920x1200~60Hz
2: 1600x900~60Hz
OpenGL: renderer: Mesa Intel UHD Graphics 620 (KBL GT2)
v: 4.6 Mesa 23.3.2-1pop0~1704238321~22.04~36f1d0e~dev

With Firefox:

Firefox speedometer 3.0 results

With Vivaldi:

Vivaldi speedometer 3.0 results

With Opera:

Opera speedometer 3.0 results

With Chromium:

Chromium speedometer 3.0 results

For comparison, here are the results from Safari on my Apple iPhone 13 mini:

iPhone 13 mini Safari speedometer 3.0 results

While the iPhone may score higher on the benchmark, the laptop is still much faster and more useful in practice. The surprise result here, in light of all the press coverage about Chrome being faster, is that Firefox scored fastest on the laptop.

Video acceleration in Firefox for radeon graphics card

Video acceleration in Firefox (all recent versions from the past year and more) has been a problem on my desktop computer with its Radeon graphics card. See my setup. Various websites would render videos as glitches, and could even cause xfdesktop to freeze/crash and need to be killed (xfdesktop automatically restarts after being killed).

Thanks to UbuntuHandbook for its advice at Enable Hardware Video Acceleration (VA-API) For Firefox in Ubuntu 20.04 / 18.04 & Higher and Get Firefox VA-API Hardware Acceleration working on NVIDIA GPU, I finally got video acceleration to stop glitching and take advantage of the radeon graphics card in the desktop.

First, I followed the advice in these posts to run vainfo.

$ sudo vainfo
[sudo] password for user: 
error: XDG_RUNTIME_DIR not set in the environment.
libva info: VA-API version 1.14.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/r600_drv_video.so
libva info: Found init function __vaDriverInit_1_14
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.14 (libva 2.12.0)
vainfo: Driver version: Mesa Gallium driver 23.2.1-1ubuntu3.1~22.04.2 for AMD TURKS (DRM 2.50.0 / 6.5.0-18-generic, LLVM 15.0.7)
vainfo: Supported profile and entrypoints
  VAProfileMPEG2Simple            :    VAEntrypointVLD
  VAProfileMPEG2Main              :    VAEntrypointVLD
  VAProfileVC1Simple              :    VAEntrypointVLD
  VAProfileVC1Main                :    VAEntrypointVLD
  VAProfileVC1Advanced            :    VAEntrypointVLD
  VAProfileH264ConstrainedBaseline:    VAEntrypointVLD
  VAProfileH264Main               :    VAEntrypointVLD
  VAProfileH264High               :    VAEntrypointVLD
  VAProfileNone                   :    VAEntrypointVideoProc

So, the graphics card should handle basic web video. And, I know radeon is my graphics driver after running the following command:

$ inxi -G
Graphics:
Device-1: AMD Turks GL [FirePro V3900] driver: radeon v: kernel
Display: x11 server: X.Org v: 1.21.1.4 driver: X: loaded: ati,radeon
unloaded: fbdev,modesetting,vesa gpu: radeon resolution: 1: 1920x1080~60Hz
2: 1920x1200~60Hz
OpenGL: renderer: AMD TURKS (DRM 2.50.0 / 6.5.0-18-generic LLVM 15.0.7)
v: 4.5 Mesa 23.2.1-1ubuntu3.1~22.04.2

I then configured advanced settings in Firefox — about:config in the address bar — in the following manner:

media.ffmpeg.vaapi.enabled  true
gfx.x11-egl.force-enabled   true
widget.dmabuf.force-enabled true
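
These prefs can also be made persistent in a user.js file in the Firefox profile directory, which Firefox reads at each startup. A minimal sketch, assuming a hypothetical profile folder name (look up your actual one under ~/.mozilla/firefox/):

```shell
# Sketch only: append the three about:config prefs above to user.js.
# "example.default-release" is a placeholder profile folder name.
PROFILE_DIR="${PROFILE_DIR:-$HOME/.mozilla/firefox/example.default-release}"
mkdir -p "$PROFILE_DIR"
cat >> "$PROFILE_DIR/user.js" <<'EOF'
user_pref("media.ffmpeg.vaapi.enabled", true);
user_pref("gfx.x11-egl.force-enabled", true);
user_pref("widget.dmabuf.force-enabled", true);
EOF
```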

Note: I tried other recommended changes in the UbuntuHandbook posts, but those changes were ineffective for my setup (the glitching came back).

Finally, I added the following comment and lines to my .bashrc file:

# mozilla video firefox fixes in .bashrc file
export LIBVA_DRIVER_NAME=radeon
export MOZ_DISABLE_RDD_SANDBOX=1
export MOZ_X11_EGL=1

Note: My setup is pretty basic, so I do not have much customization in my .profile file. While many aliases are in .bash_aliases, all other customization is in .bashrc.

After quitting Firefox, I typed source .bashrc in the terminal to reload the .bashrc file and then launched Firefox from the terminal with the command firefox. A test of a website that had previously been glitching showed that these fixes resolved the problem: no more glitching, and radeontop showed the graphics card doing some work while video played inside Firefox:

radeontop showing graphics card processing video within Firefox

Limits of AI and LLM for attorneys

Note: Creepio, an AI, is a featured player among Auralnauts.

The current infatuation with Artificial Intelligence (AI), especially at the state bar which is pushing CLEs about how lawyers need to get on the AI bandwagon, is generally an un-serious infatuation with a marketing concept.

AI and LLMs (large language models, on which much of recent AI is based) have nothing to do with accuracy. So, for a legal practice or any kind of professional activity in which accuracy is priority number one, AI and LLMs are a pipe dream. A lawyer cannot be wrong about whether a murder or misconduct took place in one out of every hundred cases. Rather, a lawyer needs to get the difference between the two events right 100% of the time. But AI/LLM uses predictive analytics, an algorithm, to decide whether a murder or misconduct occurred without consideration of the actual facts at issue.

Yes, there is much in life for which accuracy is not important. For those tasks, AI/LLM will be incredibly useful for making connections across all of the data and metadata now being collected about us. Simply being better than a coin flip at guessing something can be a major advance for certain kinds of work. More on that below.

Note: Most AI/LLM applications, so far, are less accurate than a coin flip. See ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate.

For now, there is a more immediate problem for lawyers. Participation by lawyers with their clients' information in this AI/LLM modeling is likely to run afoul of the confidentiality that lawyers have to maintain, as these AI/LLM models are designed without any confidentiality, or are even intended to merge "data" across clients, all the while "futzing" what the models are doing with all of that data.

Bruce Schneier did a post on Slate that ars technica expanded on. The ars technica folks spotlight the lack of confidentiality built into AI/LLM:

We’ve recently seen a movement from companies like Google and Microsoft to feed what users create through AI models for the purposes of assistance and analysis. Microsoft is also building AI copilots into Windows, which require remote cloud processing to work. That means private user data goes to a remote server where it is analyzed outside of user control. Even if run locally, sufficiently advanced AI models will likely “understand” the contents of your device, including image content. Microsoft recently said, “Soon there will be a Copilot for everyone and for everything you do.”

Despite assurances of privacy from these companies, it’s not hard to imagine a future where AI agents probing our sensitive files in the name of assistance start phoning home to help customize the advertising experience. Eventually, government and law enforcement pressure in some regions could compromise user privacy on a massive scale. Journalists and human rights workers could become initial targets of this new form of automated surveillance.

Advertising is really just the surface of the problem, however. Confidentiality is antithetical to how AI/LLM functions, but this lack of confidentiality will also be hidden from us. Schneier has these details (hat tip from pixel envy for this additional info/link).

The first is that these AI systems will be more relational. We will be conversing with them, using natural language. As such, we will naturally ascribe human-like characteristics to them.

This relational nature will make it easier for those double agents to do their work. Did your chatbot recommend a particular airline or hotel because it’s truly the best deal, given your particular set of needs? Or because the AI company got a kickback from those providers? When you asked it to explain a political issue, did it bias that explanation towards the company’s position? Or towards the position of whichever political party gave it the most money? The conversational interface will help hide their agenda.

The second reason to be concerned is that these AIs will be more intimate. One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.

And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.

So, if we are letting AI/LLM into our lives and into our work, we will also be letting AI/LLM use our “work” to make its own connections so that it can be effective in “helping” us.

At present, there are nearly no restrictions on what AI/LLM can do with the data you provide it (especially in the US). The license agreement with each new tech product presents a yes/no contract of adhesion. There is no negotiation about the substance; you either agree and get to use the software/hardware, or you disagree and are forbidden from using it.

Which brings me to the accuracy issues. For actual legal work or any kind of work for which factual accuracy is necessary, AI/LLM cannot be trusted. No one should think that the first result from a google search is the complete answer. But, we know that google’s search engine – its AI – in general produced better search results than altavista. So, over time altavista declined in use as people switched their search efforts to google. As a result, most people reading this will not even know what altavista was.

For AI/LLM to succeed in general, it simply needs to be slightly better than what currently exists. And, in general, AI/LLM is being pushed into activities where there really are no current operations at all. All of the data that companies (and the government) have is too amorphous to do anything but supply the most basic of connections.

The excitement for AI/LLM right now is that it can create some order in an un-mapped wilderness. The accuracy that is needed for these tasks is little more than being better than no accuracy at all.

For example, imagine a government or company that wants to identify everyone in Wisconsin who has a lakeside cabin. Individually searching county property records for this information is a monumental task. Searching zillow.com is not much better. On the other hand, pulling together tidbits of data correlated with ownership of lakeside cabins could lead to a dataset that has better than 50% accuracy for a fraction of the cost. AI/LLM is being designed to do this kind of correlation.

This expansive use of correlation is what has tech companies (and numerous governments) so excited. At a fraction of the cost in personnel and time, they can gain access to information that is somewhat accurate.

And, there are, in practical terms, no limits on how this information is collected or how it is used. In the civil rights context, for example, we know of massive databases being collected regarding our phones.

This problem is where the legal profession should be entering the picture. Rather than as a consumer of AI/LLM, legal professionals should be considering how to monitor and administer AI/LLM. Schneier explains:

If we want trustworthy AI, we need to require trustworthy AI controllers.

We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.

We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants.

The legal profession should be leading the way to establish limits and guardrails for AI/LLM that starkly and obviously limit how these systems work and the expectations they can create among lawyers and the public at large. Until then, the legal profession should have little to nothing to do with using AI/LLM in legal practice. In other words, stay away from Creepio.

Explaining Mastodon for Wisconsin lawyers

With twitter deteriorating by the day, there is a need for social media options. And, one of those options is Mastodon and the Fediverse. Unlike other social media where a person signs up for an account at a central repository (think Facebook, twitter, or LinkedIn), there is no single repository or instance of Mastodon. Rather, Mastodon's only real organization is a set of communication protocols called ActivityPub, which lets posts from one instance reach users across other Mastodon communities/servers, in what is called the Fediverse.

“Decentralization is a big part of Mastodon’s DNA and is at the forefront of our mission.”

There is general help for explaining Mastodon and for setting up accounts via buffer, Wired, EFF, and Tidbits.

As Mastodon is open source software created by "geeks," there is some initial complexity in light of all the available options. So, no single help or set-up guide is going to work for all possible users. And, as a working lawyer who has been extremely busy the past few years — ahem, unemployment — I personally have only scratched the surface of what is happening on Mastodon.

But, as the set-up guides above indicate, the first thing to do when setting up a Mastodon account is to select a community — a Fediverse — to join. Because of my interest in open source software, I signed up for an account with fosstodon, an open source software community.

Besides common interests, the key issues for deciding your Fediverse community are knowing how that organization will behave and how it is structured, including its code of conduct, its server rules, and how you can support your community (as Mastodon is open source, these Fediverse communities are run by volunteers, so having a way to contribute your financial support to the maintainers of the Fediverse is vital). As an example, here is the about page for the fosstodon Fediverse.

Once a Fediverse community is selected, you then create a user-id and password for that Fediverse. As a result, handles on Mastodon are actually in two parts:

@username@domain

On the web, this handle turns into:

https://domain/@username

For example, my handle is @vforberger@fosstodon.org, so my web address for my Mastodon account is https://fosstodon.org/@vforberger. And, I login to my Mastodon account by going to https://fosstodon.org/.
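
The handle-to-URL mapping is mechanical, so it can be sketched with a small shell function (the function itself is my own illustration, not part of Mastodon):

```shell
# Illustrative helper: convert a @username@domain handle into the
# corresponding profile URL.
handle_to_url() {
  local handle="${1#@}"        # drop the leading @
  local user="${handle%%@*}"   # text before the remaining @
  local domain="${handle#*@}"  # text after it
  printf 'https://%s/@%s\n' "$domain" "$user"
}

handle_to_url "@vforberger@fosstodon.org"
# → https://fosstodon.org/@vforberger
```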

Just because you connect to one Fediverse community, however, does not mean your connections or the information being shared are restricted to that one community/server. Because the Fediverse is a networked collection of many, many communities/servers, posts, or toots, are shared across the entire Fediverse (unless a Fediverse community has intentionally decided NOT to share information with another, specific Fediverse community — more on that issue below).

So, even though I am on fosstodon, I see toots from numerous people outside of the fosstodon community, and I even subscribe to or follow accounts/people on other Fediverse communities.

There is no algorithm with Mastodon

Because there is no corporate entity in search of profits, there is no algorithm running behind the scenes filling your feed with posts. As a result, your feed turns entirely on the Fediverse server with which you signed up (your “local” news in your Fediverse community) and the accounts/people that you follow (your “home” timeline).

If you only follow a few people or accounts, then your home timeline will not have many posts. If you follow a lot of people or accounts, or a person or two who posts prolifically (I'm looking at you, @lisamelton@mastodon.social), then your home timeline may have more information than you can possibly follow as a normal human being.

So, this advice from @growlbeast@mastodon.art is spot on:

  • FOLLOW LOTS OF PEOPLE.
  • follow hashtags you have an interest in.
  • USE HASHTAGS when you post.

In place of initially searching for topics and people, you can find possible accounts to follow at this Fediverse directory.

Using Mastodon

On a computer, you use Mastodon usually through a web browser (but there are also now apps specific for Mastodon appearing for Windows, Apple, and Linux operating systems).

Just remember: because of the federated nature of Mastodon, you actually log in to your specific server. For example, when I connect to my fosstodon account, I log in at fosstodon.org.

There are official Mastodon apps for Android and iPhone. There are also numerous other apps that, frankly, offer even better experiences than the official smartphone apps (I use Ice Cubes, for instance). A search for mastodon android/iphone client app reviews will bring up numerous options for you. As others have already noted, Mastodon has become a playground for good computer app design. So, take advantage of these options.

Mastodon itself provides a list of Mastodon apps.

Mastodon controversies

When you sign up with a Fediverse server, you become part of that community. If that community does not like what you post (i.e., there are complaints), you may find your account suspended temporarily or even permanently.

Note: Keep in mind that there is nothing preventing you from having multiple accounts and identities on the Fediverse. So, what may be problematic on one Fediverse community may be entirely kosher for another. The trick is not to complain when one community does not like what you are posting, but to make sure your posts are right for the community/Fediverse from which you are tooting that particular information.

Because the Fediverse and Mastodon are open source, others are free to create their own Fediverse connections to the community. Facebook/Meta has hinted strongly at creating a twitter competitor based on Mastodon and the Fediverse, and rumors indicate a new app called Threads is slated to be released on July 6th of this year.

Note: Truth Social is another example of using the open source model of Mastodon to create a closed social network. Truth Social is essentially a single Fediverse community in disguise and closed off from communicating with other Fediverse communities.

Not everyone at Mastodon is happy with this Facebook/Meta connection to the Fediverse. Some Fediverse communities have vowed to disconnect from this Facebook/Meta community as soon as it tries to connect. Many are concerned that Facebook/Meta will simply use Fediverse accounts for data harvesting.

My own community — Fosstodon — has issued this sensible statement. In essence, it is a wait and see stance.

  • As a team, we will review what the service is capable of and what advantages/disadvantages such a service will bring to the Fediverse
  • We will then make a determination on whether we will defederate that service
  • We will NOT jump on the bandwagon, or partake in the rumour mill that seems to be plaguing the Fediverse at the moment

It’s important to say that neither myself or Mike like anything that Facebook stands for. Neither of us use it, and both of us go to great lengths to avoid it when browsing the web. So if this service introduces any issues that could negatively impact our users, we will defederate.

However, we don’t know what this thing is yet. Hell, we don’t even know if this thing will actually exist yet. So let’s just wait and see.

What if this thing ends up being a service that can allow you to communicate with your friends who still use Facebook, via the Fedi, in a privacy respecting manner. That would be pretty cool, I think; especially when you consider that one of the main concerns with new users on the Fedi is that they can’t find their friends.

Finally, several former Twitter folks have been creating an independent, twitter-like service called Bluesky. What will happen with this effort remains to be seen, however.

Final thoughts on the Fediverse and Mastodon

There are some basic pieces of information everyone should understand about Mastodon. First, there are no confidential communications on Mastodon. Everything is open and public, including messages from one user to another. So, for lawyers there is no way to communicate confidentially with clients or anyone else on Mastodon.

Second, organizations should create their own Fediverse communities. Given the federated nature of Mastodon, it seems natural that corporations and organizations will turn to the Fediverse to take control of their communities. The problems of Twitter demonstrate, if nothing else, that having your social media presence in the hands/whims of another entity is highly problematic both in the short term (having to fight misinformation) and even more so in the long term (losing any meaningful audience and participation by your “members”).

The state bar for Wisconsin, for example, could create a Fediverse server for its members (thereby making the choice of which Fediverse to sign up with an easy one). This Fediverse could then serve as a mechanism for the state's legal community to discuss and debate the legal issues of this state, as well as present news of issues as they develop, much as twitter once did.

Note: There is already a general lawyer community at @Esq.social. And, Lawprofblawg has been providing much needed laughs of late.

Third, general, mainstream news is still lacking on the Fediverse. A few news organizations/reporters have already taken to Mastodon — e.g., WGBH in Boston, Eric Gunn of Wisconsin Examiner, and Charlie Savage of the NYTimes. But, the move from twitter has been spotty at best, and the initial push into Mastodon in late 2022 has not materialized into the kind of active news feed that previously existed on twitter. I suspect there will be a bigger push into Mastodon as twitter further declines.

Fourth, account verification requires your control of a website. Rather than having a central entity verify a person's identity, verification is handled by posting some specific html code — a rel=me tag — on a website you control. This process makes sense because the identity and verification are simply based on already publicly accessible information. So, my fosstodon identity is verified because I have placed the needed tag on websites I control, something that a person who is NOT me presumably cannot do.
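
The verification tag itself is just an ordinary link with a rel="me" attribute added to a page you control. A sketch, using my own handle and a stand-in file name (page.html) for your actual website:

```shell
# Sketch: add a rel="me" link pointing at your Mastodon profile to a page
# you control. "page.html" is a placeholder for your actual site file.
cat >> page.html <<'EOF'
<a rel="me" href="https://fosstodon.org/@vforberger">Mastodon</a>
EOF
```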

That, in a nutshell, is the Fediverse. It is very different from the social media before it, but it is the model of social media to come. Do some exploring when you have the chance and join the club.

Trying out whisper for transcribing audio

Artificial intelligence is all the rage nowadays, and Barton Gellman noted that whisper.cpp offers fantastic accuracy.

So, I gave the app a run, and it is impressive. Unfortunately, directions for usage could be a bit better. Here are some helpful tips.

First, some directions for installing.

  1. Clone the git repository.

    $ git clone https://github.com/ggerganov/whisper.cpp

  2. Move into the newly created whisper.cpp directory.

    $ cd whisper.cpp

  3. Compile the software. My systems are pretty vanilla, and there were no hitches with the compile. Kudos to those writing this software.

    $ make

  4. Next, install a transcription engine by running the download script for one of the engines. There are five to choose from: tiny, base, small, medium, and large. Below, the base engine is installed.

    $ cd models
    $ ./download-ggml-model.sh base.en [downloads the base engine]
    $ cd .. [to return to the whisper.cpp directory]

Whisper and the base engine are now installed and ready to go. The basic whisper command structure is:

usage: `./main [options] file0.wav file1.wav ...`

Useful/important options to consider using, in order of use, are:

  • -m MODEL [engine model to use]
  • -otxt [txt file output format]
  • -ocsv [csv file output format]
  • -of FILENAME [name of output file, without an extension]
  • -f WAV FILE [name of wav file to transcribe]

To see all available options, enter ./main -h. Here is the output from running the following command with a short file from one of my unemployment hearings (client name and phone number removed from the transcription).

$ ./main -m models/ggml-base.en.bin -otxt -of Client-test -f ClientSample.wav

whisper_init_from_file: loading model from 'models/ggml-base.en.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51864
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 512
whisper_model_load: n_audio_head  = 8
whisper_model_load: n_audio_layer = 6
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 512
whisper_model_load: n_text_head   = 8
whisper_model_load: n_text_layer  = 6
whisper_model_load: n_mels        = 80
whisper_model_load: f16           = 1
whisper_model_load: type          = 2
whisper_model_load: mem required  =  215.00 MB (+    6.00 MB per decoder)
whisper_model_load: kv self size  =    5.25 MB
whisper_model_load: kv cross size =   17.58 MB
whisper_model_load: adding 1607 extra tokens
whisper_model_load: model ctx     =  140.60 MB
whisper_model_load: model size    =  140.54 MB

system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | 

main: processing 'ClientSample.wav' (4940975 samples, 308.8 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...

[00:00:00.000 --> 00:00:06.120]   This is a continuation of the hearing we were having difficulties with the connection.
[00:00:06.120 --> 00:00:11.280]   So ultimately I just decided to disconnect and connect all the parties again.
[00:00:11.280 --> 00:00:15.000]   I'm going to call the attorney first.
[00:00:15.000 --> 00:00:25.200]   Hello, this is administrative law judge Barbara Gerber.
[00:00:25.200 --> 00:00:27.320]   Do we have a better connection?
[00:00:27.320 --> 00:00:28.320]   It is better.
[00:00:28.320 --> 00:00:35.320]   All right, let me try to connect Miss CLIENT again.
[00:00:35.320 --> 00:00:55.320]   It's styling.
[00:00:55.320 --> 00:01:13.320]   Please leave your message for ###-###-####.
[00:01:13.320 --> 00:01:21.240]   Miss CLIENT, this is administrative law judge Barbara Gerber calling regarding your unfinished
[00:01:21.240 --> 00:01:23.200]   unemployment appeal hearing.
[00:01:23.200 --> 00:01:27.440]   I'm going to wait a couple of minutes and then I'll give you another call and hopefully we
[00:01:27.440 --> 00:01:29.440]   can make a connection at that time.
[00:01:29.440 --> 00:01:30.440]   Thank you.
[00:01:30.440 --> 00:01:37.240]   So Mr. Forberger, I'm going to give her about five minutes and see if she can figure out
[00:01:37.240 --> 00:01:46.360]   either a different phone or location and get to some spot where we can finish the hearing.
[00:01:46.360 --> 00:01:50.160]   All right.
[00:01:50.160 --> 00:01:59.840]   Attorney for Berger.
[00:01:59.840 --> 00:02:03.240]   Thank you.
[00:02:03.240 --> 00:02:13.240]   [BLANK_AUDIO]
[00:02:13.240 --> 00:02:23.240]   [BLANK_AUDIO]
[00:02:23.240 --> 00:02:33.240]   [BLANK_AUDIO]
[00:02:33.240 --> 00:02:43.240]   [BLANK_AUDIO]
[00:02:43.240 --> 00:02:53.240]   [BLANK_AUDIO]
[00:02:53.240 --> 00:03:03.240]   [BLANK_AUDIO]
[00:03:03.240 --> 00:03:13.240]   [BLANK_AUDIO]
[00:03:13.240 --> 00:03:23.240]   [BLANK_AUDIO]
[00:03:23.240 --> 00:03:33.240]   [BLANK_AUDIO]
[00:03:33.240 --> 00:03:43.240]   [BLANK_AUDIO]
[00:03:43.240 --> 00:03:53.240]   [BLANK_AUDIO]
[00:03:53.240 --> 00:04:03.240]   [BLANK_AUDIO]
[00:04:03.240 --> 00:04:13.240]   [BLANK_AUDIO]
[00:04:13.240 --> 00:04:23.240]   [BLANK_AUDIO]
[00:04:23.240 --> 00:04:33.240]   [BLANK_AUDIO]
[00:04:33.240 --> 00:04:43.240]   [BLANK_AUDIO]
[00:04:43.240 --> 00:04:53.240]   [BLANK_AUDIO]
[00:04:53.240 --> 00:05:03.240]   [BLANK_AUDIO]
[00:05:03.240 --> 00:05:13.240]   [BLANK_AUDIO]

output_txt: saving output to 'Client-test.txt'

whisper_print_timings:     fallbacks =   4 p /   0 h
whisper_print_timings:     load time =   230.18 ms
whisper_print_timings:      mel time =  2945.69 ms
whisper_print_timings:   sample time =   511.61 ms /   564 runs (    0.91 ms per run)
whisper_print_timings:   encode time = 63995.05 ms /    26 runs ( 2461.35 ms per run)
whisper_print_timings:   decode time = 11700.60 ms /   548 runs (   21.35 ms per run)
whisper_print_timings:    total time = 79435.22 ms

As noted in this output, a txt file called Client-test.txt with this transcription was also produced. A test with the same WAV file using the medium engine produced this text (time stamps removed).

This is a continuation of the hearing.
We were having difficulties with the connection, so ultimately I just decided to disconnect
and connect all the parties again.
I'm going to call the attorney first.
Hello, this is Administrative Law Judge Barbara Gerber.
Do we have a better connection?
It is better.
All right.
So let me try to connect Ms. CLIENT again.
It's dialing.
Please leave your message for ###-###-####.
Ms. CLIENT, this is Administrative Law Judge Barbara Gerber calling regarding your unfinished
unemployment appeals hearing.
I'm going to wait a couple of minutes and then I'll give you another call and hopefully
we can make a connection at that time.
Thank you.
So Mr. Forberger, I'm going to give her about five minutes and see if she can figure out
either a different phone or location and get to some spot where we can finish the hearing.
All right?
Attorney Forberger?
Attorney Forberger?
Yes.
Okay.
Okay.
Okay.
Okay.
Okay.
Okay.
Okay.
Okay.
Okay.

This transcription is pretty good. But, it is still a long way from replacing a court reporter.
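
Incidentally, if you need to strip the bracketed timestamps from whisper's console output yourself, a sed one-liner does the job. A sketch:

```shell
# Remove whisper's "[hh:mm:ss.mmm --> hh:mm:ss.mmm]" prefix from a line,
# leaving only the transcribed text.
echo '[00:00:00.000 --> 00:00:06.120]   This is a continuation of the hearing.' |
  sed -E 's/^\[[0-9:.]+ --> [0-9:.]+\][[:space:]]*//'
# → This is a continuation of the hearing.
```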

Menu on wrong side of the screen in Vivaldi

In recent versions of Vivaldi, the menu has started appearing on the right side of the window.

Vivaldi in Xubuntu with menu on right side of screen

I am a stickler for usability, and so I want a menu showing. I also follow the original human interface design guidelines of trying to have window controls on the upper-left corner of windows/screens.

Any insights into how to fix this display bug in order to get the menu back on the left side of the window?

This bug is showing up on all my computers. Here are the basics:

System:
Kernel: 5.17.5-76051705-generic x86_64 bits: 64 
Desktop: Xfce 4.14.2 Distro: Ubuntu 20.04.4 LTS (Focal Fossa) 

Not all is hunky-dory with Linux, but it is doing as well as the others

Tags

,

Dedoimedo has an excellent commentary on the state of the Linux desktop.

He notes that usability has plateaued in many ways. I agree. The basic functionality and speed I had with Xubuntu 14.04 (Trusty Tahr) was stellar. Now running Xubuntu 20.04 (Focal Fossa) on both newer and faster desktop and laptop computers, I have had problems with graphics cards, Samba networking is a bust that I work around, and connecting my iPhone for file transfers is hit or miss.

Yes, the world is not standing still. Linux systems like Xubuntu are actually undergoing massive changes through updates to the xfce window manager while still trying to retain the same general look and functionality. That kind of work is much harder than simply creating something new (more like restoring an old house with good bones than building a new house on an empty lot). But that hard work does not mean longstanding defects should remain. A remodeling job on a house is still incomplete if the electrical wiring is exposed or the finish carpentry is not in place.

Note: In contrast to Dedoimedo’s review of Xubuntu 20.04, the limitations with the current version are not a problem for how I have set up my computers. And I value the hardware control and compatibility I get with this version of Xubuntu. For instance, whereas Kubuntu has no obvious method for adjusting sound inputs and hardware, I have obvious access through Xubuntu’s PulseAudio plugin on my panel.

The splintering that occurs in Linux systems with new distributions and spin-offs popping up all over the place — a major factor in Dedoimedo’s criticism — is surely an important reason why the edges are more frayed today than they were a few years ago. Some self-discipline and focus is needed in the world of Linux, just as self-discipline and focus is needed in most of life.

One example of this concentrated focus, and one deserving of praise, is LibreOffice. On my setup, without the ribbon but with traditional menus and one toolbar customized with the formatting tools I use, LibreOffice has been a joy to use in its newer versions (currently running v.6.4.6.2).

LibreOffice word processor

Finally, it should be pointed out that usability has seemingly plateaued on other operating systems as well.

I still have a Mac for the family computer, and more and more software is broken on the current version — Catalina/10.15 — without much if any additional benefit. Snow Leopard/10.6 was a model of stability and design, and in general the Mac has yet to repeat that performance.

My daughters want to game, and so they both now have Windows 10 desktops. Certainly more software is available on Windows 10 than on either macOS or Linux systems. But Windows 10 remains a complete kludge in many ways, with both new and old (aka Windows 7) design elements mixed throughout. For instance, there is a Settings app, but many vital settings still must be set via the Control Panel. Why? How can these dual settings systems still exist?

Issues with the new Insync3

Tags

Insync has undergone a major rewrite of its underlying sync framework from version 1.5.x to 3.0.x.

Integration with file managers like Thunar is a work in progress in this new version. More troubling is a major change in sync behavior with the series 3 releases. While the new version has many more syncing options, there is a significant change that is NOT adequately explained.

Previously, all files in the sync folder were synced across Google Drive and the computers connected via Insync UNLESS you selected parts of the folder/directory for a manual sync or no sync at all.

With an upgrade to version 3, however, all files on a computer are still synced to Google Drive, but new files created on one computer are no longer added to the other computers connected to Google Drive via Insync. As a result, folders across computers will get out of sync with each other, which kinda defeats the whole purpose of syncing software for most folks.

Here is what you will see when examining an un-synced folder from within Insync:

Un-synced folder contents

In this Employee folder, there are numerous files that are NOT synced on the particular computer on which Insync is running. These files were added on another computer and synced to Google Drive. But the files are NOT synced automatically to other computers unless I now tell Insync that I want them synced with this computer.

To fix this problem, on each computer you need to go to that folder from within Insync and then select the cloud selective sync option:

Selecting the selective sync option

You then need to select the folder (or file) you want to sync on that computer:

Folder to sync selected

Then, click on the green Sync button, and the contents of that folder and all sub-folders will be synced on that specific computer:

Folders synced

This process needs to be done on each computer and for every folder that needs to be synced across those computers.

Update (16 Sept. 2020): The security key for the Insync PPA expired this month. The Insync forums have the solution:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys ACCAF35C

Updates and upgrades should proceed normally after entering this terminal command.

Geekbench scores

Tags

, ,

Here are some comparative geekbench scores between a 2011 MacBook Pro, a thinkcentre desktop, and a 2018 Galago Pro from System76:

Single-Core scores
MacBook Pro  desktop  Galago Pro
2935         2186     4209

Multiple-Core scores
MacBook Pro  desktop  Galago Pro
6282         3493     11636

The MacBook Pro is a 13-inch Early 2011 model with 8 GB of memory, an Intel Core i7-2620M running at 2.7 GHz, and an SSD replacing the original hard drive.

The desktop is a Lenovo 7373BC7 (aka a ThinkCentre M58-7373) with 4 GB of memory, an Intel Core 2 Duo E8400 running at 3.0 GHz, a basic Nvidia graphics card, and an SSD for the boot drive.

The Galago Pro (previously reviewed here) has 8 GB of memory, an Intel Core i5-8250U with a maximum turbo speed of 3.40 GHz, and a fast SSD for the boot drive.
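Putting the scores above side by side, the relative speedups are easy to compute; a quick sketch using the numbers from the tables:

```python
# Geekbench scores from the tables above
scores = {
    "MacBook Pro": {"single": 2935, "multi": 6282},
    "desktop":     {"single": 2186, "multi": 3493},
    "Galago Pro":  {"single": 4209, "multi": 11636},
}

# How much faster is the Galago Pro than each older machine?
for name in ("MacBook Pro", "desktop"):
    single = scores["Galago Pro"]["single"] / scores[name]["single"]
    multi = scores["Galago Pro"]["multi"] / scores[name]["multi"]
    print(f"vs {name}: {single:.2f}x single-core, {multi:.2f}x multi-core")
# → vs MacBook Pro: 1.43x single-core, 1.85x multi-core
# → vs desktop: 1.93x single-core, 3.33x multi-core
```

The multi-core gap against the old Core 2 Duo desktop is the striking one: more than triple the score, which matches how the machines feel in everyday use.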