
Sourcery refactored main branch #1

Open
sourcery-ai[bot] wants to merge 1 commit into main from sourcery/main

Conversation


@sourcery-ai sourcery-ai bot commented Oct 13, 2022

Branch main refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin:

Review changes via command line

To manually merge these changes, make sure you're on the main branch, then run:

git fetch origin sourcery/main
git merge --ff-only FETCH_HEAD
git reset HEAD^

Help us improve this pull request!

@height

height bot commented Oct 13, 2022

Link Height tasks by mentioning a task ID in the pull request title, commit messages, description, or comments together with the keyword "link" (e.g. "Link T-123").

💡 Tip: You can also use "Close T-X" to automatically close a task when the pull request is merged.

@sourcery-ai sourcery-ai bot requested a review from lloydwoodham October 13, 2022 08:57
Comment on lines -23 to +26

df = pd.DataFrame({'BR': B.T[0],
'BT': B.T[1],
'BN': B.T[2],
'|B|': norm}, index = time)

return df

return pd.DataFrame(
{'BR': B.T[0], 'BT': B.T[1], 'BN': B.T[2], '|B|': norm}, index=time
)

Function cdf2df refactored with the following changes:
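The cdf2df change is an instance of Sourcery's inline-immediately-returned-variable rule. A minimal pure-Python sketch of the pattern (function names and data are illustrative, not from the repository):

```python
# Before: build the result, bind it to a name, then return the name.
def cdf2df_before(rows):
    table = {"BR": [r[0] for r in rows], "BT": [r[1] for r in rows]}
    return table

# After: return the expression directly; behaviour is identical,
# there is just one less name to track.
def cdf2df_after(rows):
    return {"BR": [r[0] for r in rows], "BT": [r[1] for r in rows]}

rows = [(1, 2), (3, 4)]
print(cdf2df_before(rows) == cdf2df_after(rows))  # → True
```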

Comment on lines -131 to +128
if isinstance(arr[0], u.Quantity):
val_to_subtract = 360 * u.deg
else:
val_to_subtract = 360
val_to_subtract = 360 * u.deg if isinstance(arr[0], u.Quantity) else 360

Function unwrap_lons refactored with the following changes:
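The unwrap_lons change collapses a four-line if/else into a single conditional expression. A self-contained stand-in (the Quantity class below only mimics astropy's unit-carrying type for demonstration):

```python
class Quantity(float):
    """Toy stand-in for astropy.units.Quantity."""

def val_to_subtract(first):
    # Conditional expression replacing the original if/else block.
    return Quantity(360) if isinstance(first, Quantity) else 360

print(type(val_to_subtract(Quantity(10.0))).__name__)  # → Quantity
print(val_to_subtract(10.0))                           # → 360
```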

num = np.around(num, decimals=0)
arr = np.linspace(val1, val2, int(num)+1)
return arr
return np.linspace(val1, val2, int(num)+1)

Function get_linspace_arr refactored with the following changes:
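For get_linspace_arr, the temporary arr is likewise inlined into the return. A runnable sketch with the same shape (argument values are illustrative):

```python
import numpy as np

def get_linspace(val1, val2, num):
    # Round the (possibly fractional) step count, then return the
    # linspace directly instead of assigning it to a temporary.
    num = np.around(num, decimals=0)
    return np.linspace(val1, val2, int(num) + 1)

arr = get_linspace(0.0, 1.0, 4.4)  # 4.4 rounds to 4, giving 5 points
print(len(arr))  # → 5
```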

Comment on lines -88 to +87
polar_img_arr = ndimage.map_coordinates(img_arr, polar_inds, order=1, cval=cval)
return polar_img_arr
return ndimage.map_coordinates(img_arr, polar_inds, order=1, cval=cval)

Function cart_to_polar_v1 refactored with the following changes:

Comment on lines -135 to +133
polar_img_arr = ndimage.map_coordinates(img_arr, polar_inds, order=1, cval=cval)
return polar_img_arr
return ndimage.map_coordinates(img_arr, polar_inds, order=1, cval=cval)

Function cart_to_polar refactored with the following changes:

Comment on lines -44 to +41
y = int(date[0:4])
y = int(date[:4])

Function download_tswf refactored with the following changes:

This removes the following comments (why?):

# Dowloading with wget
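The download_tswf change drops a redundant explicit start index from a slice. A quick illustration (the date string is made up):

```python
date = "2022-10-13T08:57:00"
# date[0:4] and date[:4] are the same slice; omitting the 0 is idiomatic.
y_before = int(date[0:4])
y_after = int(date[:4])
print(y_before == y_after == 2022)  # → True
```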

Comment on lines -122 to +115
E = np.dot(M, ww[0:2, :]) * 1e3 # transformation into SRF (Y-Z) in (mV/m)
return E
return np.dot(M, ww[0:2, :]) * 1e3

Function convert_to_SRF refactored with the following changes:

This removes the following comments (why?):

# transformation into SRF (Y-Z) in (mV/m)

Comment on lines -160 to +152
if sr > 300000:
fmax = 200000
else:
fmax = 100000
fmax = 200000 if sr > 300000 else 100000

Function plot_spectrum refactored with the following changes:

else:
norm=None

norm = colors.LogNorm(vmin=vmin, vmax=vmax) if lognorm and vmin>0. else None

Function spectro_plot refactored with the following changes:

Comment on lines -65 to +79

w1 = [i for i, j in enumerate(dets) if j == '1']
w2 = [i for i, j in enumerate(dets) if j == '2']
w3 = [i for i, j in enumerate(dets) if j == '3']
w4 = [i for i, j in enumerate(dets) if j == '4']


dm, loc = min((dm, loc) for (loc, dm) in enumerate([len(w1),len(w2),len(w3),len(w4)]))

mos_inx=np.zeros([dm, 4])
t=Time(times)
#round times to nearest 20 minutes (gives better performance)
mj=t.mjd
mj=np.around((mj-mj[0])*1200)


Found the following improvement in Function match_files:
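The min() call in this diff uses the tuple-comparison argmin idiom: pair each value with its index and take the minimum; tuples compare element-wise, so the smallest length wins and ties go to the lowest index. A small sketch (the list is illustrative):

```python
lengths = [7, 3, 9, 3]
# Yields (value, index) pairs: (7, 0), (3, 1), (9, 2), (3, 3);
# min() picks (3, 1) — the smallest value at its earliest position.
dm, loc = min((dm, loc) for (loc, dm) in enumerate(lengths))
print(dm, loc)  # → 3 1
```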

#create outputs
n=mos_inx.shape


Found the following improvement in Function shi_mov_cube:

Comment on lines -304 to +404
s=data.shape
#remove nan values
data[~np.isfinite(data)]=1
inx=0

#if needed, establish bytescale
if vrange[0]==vrange[1]:
vrange=scale_cube(data)

#if desired by user, check for excessive pixels in each individual frame for bytescaling
if qcheck == True:
hdrs2=[]
print('Checking individual frames for quality...')
for i in range(s[0]):
ba=np.logical_and(data[i,:,:] <= vrange[1], data[i,:,:] >= vrange[0])
cnt=np.count_nonzero(np.where(ba)[0])
if cnt/(s[1]*s[2]) > .7:
if inx==0:
data2=data[i,:,:]
hdrs2.append(hdrs[i])
inx+=1
else:
data2=np.dstack((data2, data[i,:,:]))
hdrs2.append(hdrs[i])
inx+=1

data=np.transpose(data2, (2,0,1))
hdrs=hdrs2
print(inx, ' of ', s[0], ' frames used')


#if desired, write processed output fits files
if writefits == True:
print('Writing FITS files...')
for i in range(s[0]):
times=hdrs[i]['date-avg']
x=hdrs[i]['filename'].find('solohi-')
pfilename=hdrs[i]['filename'][0:x+7]+'mft_'+times[0:4]+times[5:7]+times[8:13]+times[14:16]+times[17:19]+'_V00.fits'
hdrs[i]['filename']=pfilename
outdat=data[i,:,:].copy()
if scale == True:
outdat=(outdat-vrange[0])/(vrange[1]-vrange[0])*255
outdat[outdat < 0]=0
outdat[outdat > 255]=255
htmp=fits.PrimaryHDU(outdat.astype(float), header=hdrs[i])
htmp.writeto(savepath+pfilename, overwrite=True)
#set up the animation window
print('Launching animation...')
fig=plt.figure()
ax=plt.subplot()
if grid==False:
im=ax.imshow(data[0,:,:], origin='lower', vmin=vrange[0], vmax=vrange[1], cmap='gray')
plt.xticks([])
plt.yticks([])
plt.axis('off')
elif proj==True:
im=ax.imshow(data[0,:,:], origin='lower', vmin=vrange[0], vmax=vrange[1], cmap='gray', extent=[elongation[0], elongation[1], latitude[0], latitude[1]])
ax.set_xlabel('HPC Elongation (Deg)')
ax.set_ylabel('HPC Latitude (Deg)')
ax.grid(color='white')
elif wcscor==True:
wcs=WCS(hdrs[0])
ax=plt.subplot(projection=wcs)
im=ax.imshow(data[0,:,:], vmin=vrange[0], vmax=vrange[1], cmap='gray', origin='lower')
ax.grid(color='white')
ax.coords[0].set_format_unit(u.deg)
ax.coords[1].set_format_unit(u.deg)
else:
im=ax.imshow(data[0,:,:], origin='lower', vmin=vrange[0], vmax=vrange[1], cmap='gray')
ax.grid(color='white')

def init():
if hdrs != '':
times=hdrs[0]['date-avg']
ax.set_title(times)
else:
im.set_title='Frame: 0'
im.set_data(data[0,:,:])
return im, ax

def animate(i, im=im, ax=ax):
if hdrs != '':
print('Rendering frame ', i+1, 'of', s[0])
times=hdrs[i]['date-avg']

if grid==True and wcscor==True and redrawwcs==True:
wcs=WCS(hdrs[i])
ax=plt.subplot(projection=wcs)
im=ax.imshow(data[i,:,:], vmin=vrange[0], vmax=vrange[1], cmap='gray', origin='lower')
ax.grid(color='white')
ax.coords[0].set_format_unit(u.deg)
ax.coords[1].set_format_unit(u.deg)
else:
im.set_data(data[i,:,:])
ax.set_title(times)
else:
ax.set_title('Frame: '+str(i))
#animate cube and save the output. ffmpeg arguments can be changed as needed
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=data.shape[0], interval=10, blit=True)
anim.save(moviepath+moviename, writer=animation.FFMpegWriter(fps=20, extra_args=["-crf", "25", "-s", "864x576", "-vcodec", "libx264"]), dpi=250)
if writefits == True:
print('Writing FITS files...')
for i in range(s[0]):
times=hdrs[i]['date-avg']
x=hdrs[i]['filename'].find('solohi-')
pfilename = (hdrs[i]['filename'][:x + 7] + 'mft_' + times[:4] + times[5:7] +
times[8:13] + times[14:16] + times[17:19] + '_V00.fits')
hdrs[i]['filename']=pfilename
outdat=data[i,:,:].copy()
if scale == True:
outdat=(outdat-vrange[0])/(vrange[1]-vrange[0])*255
outdat[outdat < 0]=0
outdat[outdat > 255]=255
htmp=fits.PrimaryHDU(outdat.astype(float), header=hdrs[i])
htmp.writeto(savepath+pfilename, overwrite=True)
#set up the animation window
print('Launching animation...')
fig=plt.figure()
ax=plt.subplot()
if grid==False:
im=ax.imshow(data[0,:,:], origin='lower', vmin=vrange[0], vmax=vrange[1], cmap='gray')
plt.xticks([])
plt.yticks([])
plt.axis('off')
elif proj==True:
im=ax.imshow(data[0,:,:], origin='lower', vmin=vrange[0], vmax=vrange[1], cmap='gray', extent=[elongation[0], elongation[1], latitude[0], latitude[1]])
ax.set_xlabel('HPC Elongation (Deg)')
ax.set_ylabel('HPC Latitude (Deg)')
ax.grid(color='white')
elif wcscor==True:
wcs=WCS(hdrs[0])
ax=plt.subplot(projection=wcs)
im=ax.imshow(data[0,:,:], vmin=vrange[0], vmax=vrange[1], cmap='gray', origin='lower')
ax.grid(color='white')
ax.coords[0].set_format_unit(u.deg)
ax.coords[1].set_format_unit(u.deg)
else:
im=ax.imshow(data[0,:,:], origin='lower', vmin=vrange[0], vmax=vrange[1], cmap='gray')
ax.grid(color='white')

def init():
if hdrs != '':
times=hdrs[0]['date-avg']
ax.set_title(times)
else:
im.set_title='Frame: 0'
im.set_data(data[0,:,:])
return im, ax

def animate(i, im=im, ax=ax):
if hdrs != '':
print('Rendering frame ', i+1, 'of', s[0])
times=hdrs[i]['date-avg']

if grid==True and wcscor==True and redrawwcs==True:
wcs=WCS(hdrs[i])
ax=plt.subplot(projection=wcs)
im=ax.imshow(data[i,:,:], vmin=vrange[0], vmax=vrange[1], cmap='gray', origin='lower')
ax.grid(color='white')
ax.coords[0].set_format_unit(u.deg)
ax.coords[1].set_format_unit(u.deg)
else:
im.set_data(data[i,:,:])
ax.set_title(times)
else:
ax.set_title(f'Frame: {str(i)}')

#animate cube and save the output. ffmpeg arguments can be changed as needed
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=data.shape[0], interval=10, blit=True)
anim.save(moviepath+moviename, writer=animation.FFMpegWriter(fps=20, extra_args=["-crf", "25", "-s", "864x576", "-vcodec", "libx264"]), dpi=250)

Function run_movie refactored with the following changes:

Comment on lines -469 to +471


Found the following improvement in Function jframe:

Comment on lines -524 to +584
if vrange[0]==vrange[1]:
vrange=scale_cube(data)
s=jmov.shape
#determine the time span of the whole data cube
t1=Time(hdrs[0]['date-avg']).mjd
t2=Time(hdrs[s[0]-1]['date-avg']).mjd
if outx/s[0] < 10:
outx=s[0]*10
dsuns=np.zeros(s[0])
#convert time span to minutes
tspan=(t2-t1)*24*60
#get the pixels per minute in the x direction
ppm=(outx/tspan)
tcur=t1
inx=0
jimg=np.zeros([1000, outx])
for i in range(s[0]):
#determine pixel space between files
if i < s[0]-1:
tnext=Time(hdrs[i+1]['date-avg']).mjd
pixw=int(((tnext-tcur)*24*60)*ppm)
tcur=tnext
else:
pixw=outx-1-inx
#get the spacecraft distance for each file
dsuns[i]=hdrs[i]['dsun_obs']
#fill the output image with the signal from the current file at the right columns
jimg[:,inx:inx+pixw]=np.transpose(np.tile(np.median(jmov[i,:,int((pa-45)/90*1000-width/2):int((pa-45)/90*1000+width/2)], axis=1), (pixw,1)))
#increase the index of the current file
inx+=pixw
dsun_obs=np.mean(dsuns)
titleu='m'
#set the y-axis data of the current file
extent=[0, tspan/60, 5, 45]
#if unit is Rs, adjust dsun_obs and convert elongations
if unit=='Rs' or unit=='Rsun' or unit=='RS' or unit=='RSUN':
dsun_obs=dsun_obs/695700000
extent=[0, tspan/60, dsun_obs*math.sin(math.radians(5)), dsun_obs*math.sin(math.radians(45))]
titleu=unit
#if unit is au, adjust dsun_obs and convert elongations
elif unit=='AU' or unit=='au':
dsun_obs=dsun_obs/149597870700
extent=[0, tspan/60, dsun_obs*math.sin(math.radians(5)), dsun_obs*math.sin(math.radians(45))]
titleu=unit
#if unit is km, adjust dsun_obs and convert elongations
elif unit=='KM' or unit=='km' or unit=='Km':
dsun_obs=dsun_obs/1000
extent=[0, tspan/60, dsun_obs*math.sin(math.radians(5)), dsun_obs*math.sin(math.radians(45))]
titleu='km'
#if unit isn't recognized keep it in degrees
elif unit!= 'Deg' or unit != 'Degrees' or unit != 'DEG' or unit !='DEGREES':
print('Invalid unit, using elongation (degrees)')
#display the map
ax2=plt.subplot()
ax2.imshow(jimg, origin='lower', vmin=vrange[0], vmax=vrange[1], cmap='gray', extent=extent, aspect='auto')
#Set up the axes
ax2.set_xlabel('Hrs after Start Time: '+hdrs[0]['date-avg'])
ax2.set_ylabel(unit)
ax2.set_title('PA='+str(pa)+'\u00B0 Mean S/C Dist ='+str(round(dsun_obs, 3))+' '+titleu)
#return the image
return jimg
if unit in ['Rs', 'Rsun', 'RS', 'RSUN']:
dsun_obs=dsun_obs/695700000
extent=[0, tspan/60, dsun_obs*math.sin(math.radians(5)), dsun_obs*math.sin(math.radians(45))]
titleu=unit
elif unit in ['AU', 'au']:
dsun_obs=dsun_obs/149597870700
extent=[0, tspan/60, dsun_obs*math.sin(math.radians(5)), dsun_obs*math.sin(math.radians(45))]
titleu=unit
elif unit in ['KM', 'km', 'Km']:
dsun_obs=dsun_obs/1000
extent=[0, tspan/60, dsun_obs*math.sin(math.radians(5)), dsun_obs*math.sin(math.radians(45))]
titleu='km'
else:
print('Invalid unit, using elongation (degrees)')
#display the map
ax2=plt.subplot()
ax2.imshow(jimg, origin='lower', vmin=vrange[0], vmax=vrange[1], cmap='gray', extent=extent, aspect='auto')
#Set up the axes
ax2.set_xlabel('Hrs after Start Time: '+hdrs[0]['date-avg'])
ax2.set_ylabel(unit)
ax2.set_title(f'PA={str(pa)}' + '\u00B0 Mean S/C Dist =' +
str(round(dsun_obs, 3)) + ' ' + titleu)
#return the image
return jimg

Function make_jmap refactored with the following changes:

This removes the following comments (why?):

#if unit isn't recognized keep it in degrees
#if unit is km, adjust dsun_obs and convert elongations
#if unit is au, adjust dsun_obs and convert elongations
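The membership-test rewrite in make_jmap is more than cosmetic: a chain of != comparisons joined with or is always true (any string differs from at least one of the alternatives), so the original elif condition never excluded anything. A small illustration (the function names are hypothetical; the unit strings come from the diff):

```python
def is_not_degrees_buggy(unit):
    # Always True — even for unit == 'Deg', since 'Deg' != 'Degrees'.
    return unit != 'Deg' or unit != 'Degrees'

def is_known_unit(unit):
    # Membership test expresses the intent directly.
    return unit in ['Rs', 'Rsun', 'RS', 'RSUN', 'AU', 'au', 'KM', 'km', 'Km']

print(is_not_degrees_buggy('Deg'))  # → True (the bug)
print(is_known_unit('au'))          # → True
print(is_known_unit('parsec'))      # → False
```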

@sourcery-ai sourcery-ai bot commented Oct 13, 2022

Sourcery Code Quality Report

❌  Merging this PR will decrease code quality in the affected files by 0.04%.

| Quality metrics | Before | After | Change |
|---|---|---|---|
| Complexity | 7.38 ⭐ | 7.08 ⭐ | -0.30 👍 |
| Method Length | 212.57 ⛔ | 211.07 ⛔ | -1.50 👍 |
| Working memory | 12.67 😞 | 12.70 😞 | 0.03 👎 |
| Quality | 42.74% 😞 | 42.70% 😞 | -0.04% 👎 |

| Other metrics | Before | After | Change |
|---|---|---|---|
| Lines | 2056 | 2022 | -34 |

| Changed files | Quality Before | Quality After | Quality Change |
|---|---|---|---|
| MAG_tutorial/analysis_helpers.py | 71.94% 🙂 | 72.42% 🙂 | 0.48% 👍 |
| Metis_tutorial/metis_aux_lib.py | 55.79% 🙂 | 55.00% 🙂 | -0.79% 👎 |
| PHI_tutorial/belfast_helper.py | 38.09% 😞 | 38.38% 😞 | 0.29% 👍 |
| RPW_tutorial/tds_helpers.py | 54.63% 🙂 | 55.73% 🙂 | 1.10% 👍 |
| SWA_tutorial/PAS-demo/spectro.py | 33.23% 😞 | 33.34% 😞 | 0.11% 👍 |
| SolO-HI_tutorial/shifits2grid.py | 32.63% 😞 | 33.13% 😞 | 0.50% 👍 |

Here are some functions in these files that still need a tune-up:

| File | Function | Complexity | Length | Working Memory | Quality | Recommendation |
|---|---|---|---|---|---|---|
| SolO-HI_tutorial/shifits2grid.py | run_movie | 33 ⛔ | 876 ⛔ | | 12.15% ⛔ | Refactor to reduce nesting. Try splitting into smaller methods |
| SolO-HI_tutorial/shifits2grid.py | match_files | 16 🙂 | 499 ⛔ | 16 ⛔ | 26.36% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| SolO-HI_tutorial/shifits2grid.py | make_jmap | 10 🙂 | 479 ⛔ | 21 ⛔ | 27.77% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| PHI_tutorial/belfast_helper.py | plot_hrt_stokes | 9 🙂 | 595 ⛔ | 19 ⛔ | 29.52% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| PHI_tutorial/belfast_helper.py | plot_fdt_stokes | 8 ⭐ | 521 ⛔ | 19 ⛔ | 30.67% 😞 | Try splitting into smaller methods. Extract out complex expressions |

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation here for details on how these metrics are calculated.

We are actively working on this report - lots more documentation and extra metrics to come!

Help us improve this quality report!
