-
Institute of Metals Division - Melting and Freezing (Institute of Metals Lecture, 1954) By B. Chalmers
THE practical importance of the phenomena of melting and freezing must have been recognized for a very long time. The difference between ice and water, for example, has had a profound influence on the history of mankind and the evolution of society. The possibility of melting a metal and allowing it to freeze in a mold of chosen shape has been an essential ingredient in our mastery of the art of shaping metals, and therefore in the evolution of the machine age in which we find ourselves. The importance of melting and freezing, as applied to metals and alloys, has been so great, in fact, that empirical solutions have been found for the multitude of practical problems that have arisen. This approach has been so successful that relatively little attention has been directed to arriving at an understanding of the fundamentals of the processes. But metallurgy has come to a stage at which we may expect that some, at least, of the more complex problems that have not yet been solved (or perhaps even recognized) may be handled more effectively by scientific study, theoretical understanding, and logical experimentation than by trial and error. In this lecture, therefore, I propose to describe in outline what I think really happens when a metal freezes. In doing so I hope to explain many of the phenomena which have been observed, and in particular to account for the structures that are obtained in actual ingots and castings. The basic problem, to which this lecture represents a tentative partial answer, is this: a mass of metal, containing known proportions of various elements, is melted, heated to a given temperature, and then allowed to freeze under specified conditions. What will be the "structure" of the resulting metal? The term structure includes: 1—crystal size, shape, and orientations, 2—distribution of chemical elements, and 3—shape, including cracks, cavities, pores, etc.
The Solid-Liquid Interface We will first consider what takes place if a single crystal of a metal in the form of a rod is heated, not uniformly, but so that one end is hotter than the other. If this heating process is continued long enough, the hotter end will eventually melt; we will suppose that the rod is in a containing vessel so that the molten metal does not run away, Fig. 1. When some of the metal has melted, we have some solid, some liquid, and an interface or surface of contact between them. If the source of heat is now removed, the interface will move so that some of the liquid freezes, and if the supply of heat is suitably adjusted the interface will remain at rest. This very simple arrangement allows us to study the basic processes of melting and freezing, and if we fully understand this simple case, we may be able to account for what takes place under practical conditions where the heat does not all flow in the same direction, and where the heat flow is determined not by a controllable source of heat but by the heat capacity and temperature of metal and mold, and by the heat loss from the mold surface. The solid-liquid interface is evidently the region of the greatest interest to us; on one side of it there is crystalline solid, and on the other, liquid. In the solid, each atom has a well defined position, around which it vibrates as a result of thermal agitation. It only leaves this position in the relatively rare event of a "diffusion jump." The liquid is much less systematically organized. The atoms are about as far from their neighbors as in the solid, but the arrangement is much less systematic and is continuously changing. The solid and the liquid are represented diagrammatically in Fig. 2. The average energy of the atoms in the liquid is greater than in the solid by an amount that corresponds to the latent heat of fusion, i.e., the amount of heat that has to be supplied to convert unit mass of solid into liquid at the same temperature. 
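The condition for a stationary interface described above can be put in quantitative terms with the one-dimensional heat balance at the interface (the Stefan condition). The sketch below is not part of the lecture; the symbols and all property values are assumed, round numbers for illustration.

```python
# Illustrative sketch (not from the lecture): the one-dimensional heat
# balance at a planar solid-liquid interface. Freezing advances when the
# solid conducts heat away faster than the liquid delivers it:
#     rho * L_f * V = k_s * G_s - k_l * G_l
# where G_s, G_l are temperature gradients, k_s, k_l thermal
# conductivities, rho the density, and L_f the latent heat of fusion
# per unit mass. All property values below are assumed.

def interface_velocity(k_s, G_s, k_l, G_l, rho, L_f):
    """Interface velocity in m/s; positive means the liquid is freezing."""
    return (k_s * G_s - k_l * G_l) / (rho * L_f)

# Heat supply "suitably adjusted": conduction into and out of the
# interface balance exactly, and the interface remains at rest.
v_rest = interface_velocity(k_s=200.0, G_s=1000.0,
                            k_l=90.0, G_l=200000.0 / 90.0,
                            rho=2700.0, L_f=4.0e5)

# Remove the heat source: the gradient in the liquid collapses and the
# interface advances into the liquid (freezing).
v_freeze = interface_velocity(k_s=200.0, G_s=1000.0,
                              k_l=90.0, G_l=500.0,
                              rho=2700.0, L_f=4.0e5)
```

The sign convention makes the two cases in the text explicit: a balanced heat flow gives a resting interface, while any excess conduction through the solid freezes liquid at a rate proportional to that excess.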
The Two Processes As has recently been shown by Jackson and Chalmers,3 many of the features of the processes of freezing and melting can be understood if it is assumed that a continuous and rapid interchange of atoms between solid and liquid always takes place at a solid-liquid interface. It is necessary to consider two distinct processes, that of melting, in which atoms leave the surface of the solid and become part of the liquid, and the converse process,
Jan 1, 1955
-
Institute of Metals Division - Densities of Some Low-Melting Cerium Alloys By L. A. Geoffrion, R. H. Perkins, J. C. Biery
Densities of cerium metal and several low-melting binary cerium alloys were measured over the range 25° to 800°C. A volumeter, using NaK as working fluid, was used to obtain the data. The cerium, Ce-Co, Ce-Ni, and Ce-Cu alloys all exhibited an increase in density on melting, while a Ce-Mn alloy expanded on melting. FOR the proper design of a nuclear reactor, the change in density of the fuel with temperature must be known. This is especially important in a system utilizing molten fuel, such as LAMPRE (Los Alamos Molten Plutonium Reactor Experiment), since a relatively large change in density usually occurs during the solid-liquid transition. The fuel for LAMPRE is a Pu-2.5 wt pct Fe alloy with a melting temperature of 410°C. However, limitations in reactor design with this fuel have led to consideration of other plutonium-containing alloys for use in future generations of this type of reactor. Several ternary alloys containing plutonium and cerium as two components have satisfactorily low melting points. The system that at the present time appears to be most acceptable is Pu-Ce-Co; it exhibits little change in melting temperature with wide variation in plutonium concentration. Other alloys that have received some consideration contain nickel, copper, and manganese as the third constituent. The proposed fuel alloys are difficult to handle experimentally in the 25° to 800°C temperature range since they oxidize readily, react with many solvents, and contain a poisonous fissionable material. In addition, in this temperature range the alloys pass through the solid-liquid transition. Several techniques are available for measuring the densities and volume coefficients of expansion of solids or liquids. However, the only apparatus that appears suitable for measuring expansion coefficients over this temperature range and through the phase transition is a volumeter.
In a volumeter, the indicating medium must be essentially inert to and insoluble in the material being studied. It must also possess a low vapor pressure over the operating temperature range, and its coefficient of expansion must be accurately known. One material that is satisfactory in nearly all of these respects is the alloy Na-78 wt pct K, which melts at -10°C and has a vapor pressure of 860 mm Hg at 800°C. This relatively high vapor pressure at 800°C requires an overpressure of an inert gas to prevent boiling. While a volumeter is capable of determining accurately the volume coefficients of expansion of materials, it cannot be used for absolute density measurements. Therefore, a density determination at a known temperature must be coupled with the volumeter measurements to give all the desired data. The weight-loss technique using immersion in bromobenzene at room temperature proved to be satisfactory. The preliminary work that was done on this experimental program involved developing and calibrating the equipment, and measuring the densities and volume coefficients of expansion of cerium and some low-melting binary cerium alloys. The complications caused by the introduction of plutonium into the system were avoided until the equipment was proved to be satisfactory and until experience was gained in its operation. It is this first phase of the experimental work that is described in this report. DESCRIPTION OF EQUIPMENT AND OPERATING PROCEDURE The NaK volumeter is shown schematically in Fig. 1. Basically, the equipment consists of two weld-sealed stainless-steel containers of nearly identical volume. One container holds a tantalum crucible and the specimen being measured; the other contains a tantalum crucible and a tantalum specimen used as a control reference material. To avoid temperature gradients, the bombs are located in a copper block inside the furnace.
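The two measurements described above, an absolute density by weight-loss immersion plus relative volume changes from the volumeter, might be reduced as in the following sketch. This is not the authors' procedure; the bromobenzene density, capillary bore, specimen weights, and level readings are all assumed values for illustration.

```python
# Sketch of the data reduction for the coupled measurements described
# above. All numerical values here are assumed for illustration.
import math

BROMOBENZENE_DENSITY = 1.495   # g/cc near room temperature (approximate handbook value)

def immersion_density(weight_in_air_g, weight_immersed_g,
                      fluid_density=BROMOBENZENE_DENSITY):
    """Absolute density by the weight-loss (Archimedes) technique, g/cc."""
    displaced_volume = (weight_in_air_g - weight_immersed_g) / fluid_density
    return weight_in_air_g / displaced_volume

CAPILLARY_BORE_CM = 0.2        # assumed glass-capillary bore
AREA_CM2 = math.pi * (CAPILLARY_BORE_CM / 2.0) ** 2

def sample_volume_change(dh_sample_cm, dh_reference_cm, area=AREA_CM2):
    """Specimen volume change (cc) from the two capillary level changes.
    The tantalum reference leg sees the same NaK expansion and container
    effects, so differencing the legs isolates the specimen."""
    return (dh_sample_cm - dh_reference_cm) * area

# Hypothetical cerium specimen: 33.8 g in air, 26.3 g immersed
rho_25c = immersion_density(33.8, 26.3)

# Hypothetical run: sample leg rises 12.0 cm, reference leg 4.0 cm
dv = sample_volume_change(12.0, 4.0)
```

The density at any run temperature then follows from the room-temperature value: divide the specimen mass by its room-temperature volume plus the accumulated volume change.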
Stainless-steel capillaries of equal length and volume connect each stainless-steel container to a glass viewing capillary. The stainless-steel containers, stainless-steel capillaries, and a portion of the glass capillaries are filled with NaK (22 wt pct Na-78 wt pct K). The NaK/gas interface in the glass capillary is viewed with a cathetometer which is accurate to ±0.5 mm. The cathetometer readings are used to calculate the volume changes of the samples during a run. This volumeter is basically the same as that described by F. Knight in Plutonium 1960.2 However, changes in equipment design and operating procedure were made to eliminate some major operating difficulties. These changes are summarized below. 1) In filling the manometer with NaK, gas was frequently entrained in the system. Evacuation of the system before filling failed to eliminate the en-
Jan 1, 1965
-
Part V – May 1969 - Papers - Rapid Quenching Drop Smasher By W. J. Maraman, D. R. Harbur, J. W. Anderson
A device for rapidly quenching liquid metals into thin platelets has been developed at the Los Alamos Scientific Laboratory. This rapid quenching equipment is built around the technique of catching a molten drop of metal between a rapidly closing plate and a stationary plate. The design and operation of this unit are described. The closing speed of the smasher plate at impact is 12.6 ft per sec. The quenching rate for this device is controlled by the interface resistance between the plates and the platelet, and is dependent upon the heat content and density of the material being quenched. The initial quenching rate down to the freezing point of the platelet material is 10⁵ to 10⁶°C per sec. After an isothermal delay, which is proportional to the heat of fusion of the platelet material, the final cooling rate down to the temperature of the smasher plates is 10⁴ to 10⁵°C per sec. RAPID heating of metals by capacitor discharge and other methods has provided the metallurgist with a useful tool for probing into the kinetics of phase changes and the many nonequilibrium phenomena which occur during rapid temperature changes. Equally interesting studies can also be made on metals and alloys which are rapidly cooled from the liquid state.1 Studies in this field have been limited, however, because the rates at which metals could be cooled were many orders of magnitude slower than the rates possible for heating. In recent years many new laboratory methods have been developed to rapidly cool metals from the liquid state to ambient temperature and below.2-4 All of these methods involve spreading a liquid drop of metal into a thin foil in a very short time. The methods developed have varied from ejecting a drop of molten metal at the inside surface of a rotating cylinder or stationary curved plate to catching a falling drop of molten metal between rapidly closing plates.
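The interface-resistance-controlled regime described above is the classic "Newtonian" cooling limit for a thin platelet, and can be sketched as follows. This is an illustrative model, not the authors' analysis; the heat-transfer coefficient and all material values are assumed.

```python
# Minimal sketch (all values assumed) of interface-resistance-controlled
# ("Newtonian") cooling of a thin platelet caught between two plates.
# Heat leaves through both faces across an interface conductance h, so
# the rate scales inversely with the heat content per unit area.

def cooling_rate(h, T, T_plate, thickness_cm, density, specific_heat):
    """Instantaneous cooling rate, deg C per sec (cgs/calorie units)."""
    heat_capacity_per_area = density * thickness_cm * specific_heat  # cal/(cm^2 degC)
    return 2.0 * h * (T - T_plate) / heat_capacity_per_area

def arrest_time(h, T_freeze, T_plate, thickness_cm, density, heat_of_fusion):
    """Duration of the isothermal arrest while the latent heat is
    extracted, sec; proportional to the heat of fusion, as the text notes."""
    latent_per_area = density * thickness_cm * heat_of_fusion        # cal/cm^2
    return latent_per_area / (2.0 * h * (T_freeze - T_plate))

# Hypothetical aluminum-like platelet, 50 microns thick,
# h = 1 cal/(cm^2 s degC):
rate = cooling_rate(h=1.0, T=660.0, T_plate=25.0,
                    thickness_cm=0.005, density=2.7, specific_heat=0.25)
pause = arrest_time(h=1.0, T_freeze=660.0, T_plate=25.0,
                    thickness_cm=0.005, density=2.7, heat_of_fusion=95.0)
```

With these assumed values the initial rate falls within the 10⁵ to 10⁶°C per sec band quoted above, and the arrest lasts on the order of a millisecond.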
The equipment which has been developed at the Los Alamos Scientific Laboratory for rapidly cooling molten materials uses the latter of these two approaches. The basic design, operation, and initial results of this rapid quenching device are given in this report. APPARATUS The drop smasher, which is now being used to obtain rapidly cooled metal foils, is shown in Fig. 1. Basically the device consists of a smasher plate which is driven by a solenoid into a stationary plate. The solenoid is activated by a drop passing through the photoelectric cell and is powered by discharging an adjustable 350-v capacitor bank with a 66-amp peak current into it. This power supply is designed so that the solenoid is powered for 2 msec after plate closure to minimize the rebound effect. There is an adjustable time-delay mechanism between the photoelectric cell and the solenoid. Both smasher plates have changeable inserts so that a variety of materials can be used to smash the molten drop. The shaft of the moving plate is guided in an adjustable housing which has ball-bearing walls. The cabinet shown to the left of the drop smasher in Fig. 1 contains the power supply and receiver for the photoelectric cell, the time-delay mechanism, and the capacitor bank. The drop smasher can be placed inside a vacuum chamber, for use with radioactive materials, with the upper plate forming the lid, as shown in Fig. 2. On top of the vacuum lid is an induction coil, powered by an Ajax induction generator, which is used to melt drops from the end of the rod extending through the vacuum seal on top of the quartz tube. OPERATION The drop smasher shown in Fig. 2 is operated in the following manner. The smasher plates are separated and the unit is lowered into the vacuum chamber using a pressurized cylinder. The induction coil, quartz tube, and lid with sliding vacuum seal are then assembled on top of the vacuum chamber.
A rod of the material for rapid quenching studies is connected to the rod extending through the sliding vacuum seal. The vacuum chamber is then evacuated and the desired atmosphere established. The photoelectric cell is turned on, and the capacitor bank is charged and armed. Power is supplied to the induction coil, and the rod of material for rapid quenching studies is lowered into the induction field. A molten drop forms on the end of the rod, drops off, falls through the light beam of the photoelectric cell, and is then caught between the smasher plates.
Jan 1, 1970
-
Part I – January 1969 - Papers - Kinetics of Oxygen Evolution at a Platinum Anode in Lithium Silicate Melts By A. Ghosh, T. B. King
The kinetics of the discharge reaction: 2O²⁻ (in silicate melt) = O₂(g) + 4e⁻ at a platinum anode in lithium silicate melts have been studied at 1350°C by galvanostatic methods. Plots of the steady-state overpotential, η, as a function of the logarithm of the current density, i, showed inflections and were linear only at high current densities. The value of the overpotential was influenced by bubbling gas through the electrolyte. The overpotential was also studied as a function of time. The rise and decay of overpotential were very slow processes. At low current densities transport is the likely rate-controlling process, but at high current densities passivation of the electrode, presumably by an oxide film on the surface, seems to be a contributory factor. IT is well-established that molten silicates behave as electrolytes1-5 and, except in a few cases,6 conduction is entirely ionic. Moreover, it is supposed that a possible, and perhaps predominant, mechanism for phase boundary reactions between metals and slags is similar to that in corrosion, whereby anodic and cathodic processes occur at unrelated sites, the metal serving to conduct electrons.7,8 Thus electrochemical studies of some slag-metal reactions would seem to be a useful way to diagnose the rate-controlling steps in the overall reaction. The electrochemical method is, in principle, a better diagnostic tool than the direct chemical method for the following reasons: 1) The partial electrochemical reactions, which are simpler than the overall reaction, may be studied individually. 2) The rate of reaction can be controlled at will and independently of the concentrations of reactants. 3) Fast reactions can be studied by relaxation methods.9 Esin and his coworkers5,10-12 have pioneered such studies in silicates and have developed some ingenious techniques. Not all of their findings, however, can be accepted without a good deal of further work.
In this investigation, the kinetics of the oxygen discharge reaction: 2O²⁻ (in silicate melt) = O₂(g) + 4e⁻ [1] at a platinum electrode were studied by both steady-state and transient galvanostatic techniques. Interest in this reaction was first developed as a result of the findings of Fulton and Chipman13 that the reduction of silica, in a silicate slag, by carbon, dissolved in liquid iron, is a very slow reaction. Subsequent work, for example, by Rawling and Elliott,14 has demonstrated that the reaction under these conditions must be slow, because the rate is limited by diffusion of oxygen in the iron to the metal-crucible phase boundary at which a CO bubble may be nucleated. Further work by Tarassoff,15 in which the reduction of silica by aluminum dissolved in copper was studied, has shown that under these conditions, where carbon monoxide evolution is not involved, control of the reaction rate resides in diffusion of silica in the slag phase. However, there is no practical way of inducing sufficient convection in the system to make it clear that the phase boundary reaction is indeed fast. The overall reaction of silica reduction involves the discharge of silicon ions at cathodic sites and oxygen ions at anodic sites. In the examples cited, the discharged ions are dissolved in a liquid metal. In the present study of oxygen ion discharge, gaseous oxygen may be evolved at high current densities or oxygen may simply dissolve, possibly as oxygen molecules, in the silicate at very low current densities. The discharge of an oxygen ion at an anode must, in silicates less basic than the orthosilicate composition, be preceded by a reaction in the vicinity of the electrode, such as 2O⁻ = O⁰ + O²⁻, which makes oxygen ions available. Platinum was chosen as the working electrode since it is comparatively inert to oxygen and is, therefore, expected to come rapidly into equilibrium with the electrolyte and with gaseous oxygen.
Minenko, Petrov, and Ivanova16 have measured the electromotive force at a platinum electrode in molten silicates as a function of the partial pressure of oxygen in the atmosphere, the concentration of oxide ions in the melt, and the temperature. They found platinum to behave as a reversible oxygen electrode. At two different oxygen pressures, pO₂(I) and pO₂(II), the electromotive force is given by: E = (RT/4F) ln [pO₂(I)/pO₂(II)], where F is the Faraday constant, equal to 23,060 cal per v equivalent, indicating that the electrode reaction is as written in Eq. [1]. Platinum has been similarly used in molten silicates by other investigators.17,18 In this investigation platinum was used only as an anode, since a cathodic current deposits other elements on its surface and changes its characteristics.
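The reversible oxygen-electrode behavior described above can be illustrated numerically. The Faraday constant is the value quoted in the text; the gas constant in calories and the example pressures are standard/assumed values, and the function is a sketch rather than the authors' computation.

```python
# Sketch of the reversible oxygen-electrode EMF for the four-electron
# reaction 2 O^2- = O2 + 4 e-. F is the value quoted in the text;
# R and the example pressure ratio are standard/assumed values.
import math

R_CAL = 1.987        # gas constant, cal/(mol K)
F_CAL = 23060.0      # Faraday constant, cal per volt-equivalent (as in the text)

def oxygen_electrode_emf(p_o2_I, p_o2_II, temp_c=1350.0):
    """EMF (volts) between two oxygen partial pressures, same electrode."""
    T = temp_c + 273.15
    return (R_CAL * T) / (4.0 * F_CAL) * math.log(p_o2_I / p_o2_II)

# A tenfold pressure ratio at 1350 C gives roughly 80 mV:
emf = oxygen_electrode_emf(1.0, 0.1)
```

The four in the denominator is what ties the measured EMF to the four-electron reaction of Eq. [1]: a two-electron process would give twice the slope per decade of pressure.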
Jan 1, 1970
-
PART III - Cryoelectronics By Hollis L. Caswell
The present status of integrated circuits utilizing superconductive switching elements is reviewed, with special attention given to fabrication techniques, methods for interconnecting completed circuits, and refrigeration requirements. Cryoelectronics has been largely an "integrated-circuit" technology since its conception because the switching speed of superconductive devices is attractive only when these devices are fabricated with thin-film techniques. It is true that cryotron circuits can be constructed from wires of appropriate materials (as indeed was done by Dudley Buck1 in his early investigations) but these circuits will switch in times characteristic of milliseconds whereas similar circuits fabricated by thin-film methods have potential switching times of nanoseconds. Furthermore, cryoelectronic devices such as the cryotron lend themselves readily to fabrication by thin-film techniques since these components may be made from polycrystalline thin films and are relatively insensitive to the presence of impurities (as measured by semiconductor standards). Therefore, during the past decade considerable effort has been devoted to developing techniques for batch fabricating circuit arrays containing superconductive switching elements. Technology had developed to the point several years ago that fabrication of cryoelectronic arrays containing up to one hundred devices was rather straightforward. However, larger arrays containing between 10⁴ and 10⁶ components, which are required for commercial development of cryoelectronics, still pose very severe yield problems. Thus in a sense cryoelectronics found itself in 1962 at the point semiconductor technology finds itself today; namely, individual devices and small groups of integrated devices could be fabricated with acceptable yield and the outlook for building larger integrated-circuit arrays was bright. Unfortunately, problems associated largely with yield have made fabrication of these larger arrays difficult.
Unlike semiconductor technology, cryoelectronics had to solve the problems of large-scale integration before it could become economically attractive. This has proven to be a sizable burden to bear. Since several reviews exist on superconductivity,2 superconductive devices,3 and cryoelectronic technology, no attempt will be made in this paper to summarize these areas. Instead a few specific topics will be dealt with in more detail. First, a brief description is given of selected superconducting switching and storage devices with special attention to several metallurgical techniques which improve the performance of these devices. Second, techniques used to fabricate cryoelectronic devices are described with emphasis on problems affecting yield. Third, techniques for interconnecting a number of cryoelectronic planes are described. And last, refrigeration of cryoelectronic components is discussed briefly since the low operating temperature of superconductive devices is an important consideration in this technology. SUPERCONDUCTING STORAGE AND SWITCHING DEVICES The basic superconductive switching device is the thin-film cryotron. The geometry of this device is attractively simple, since it involves only the intersection of two lines that are electrically insulated from each other. The switching element (gate) and control element (control) of a crossed-film cryotron are arranged as illustrated in Fig. 1. The material for the gate is selected to permit the gate to be switched from the superconducting to the normal (resistive) state by the application of a control current. Tin, which has a critical temperature (Tc) of 3.7°K, is commonly used for the gate and the cryotron is operated at a temperature just below Tc (for example, 3.5°K). The control material (normally lead, with Tc = 7.2°K) is chosen so that the control is never driven normal during circuit operation.
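The switching condition just described can be sketched with two textbook approximations: the parabolic critical-field law Hc(T) = H0(1 - (T/Tc)²) and the field of a current strip over a superconducting ground plane, H = 0.4πI/w oersteds for I amperes in a strip w cm wide. This is an illustrative model only; H0 and the geometry are assumed, not taken from the paper.

```python
# Illustrative sketch (all values assumed) of crossed-film cryotron
# switching: the tin gate is resistive only while the magnetic field of
# the control current exceeds the gate's critical field at the operating
# temperature.
import math

def critical_field(T, Tc=3.7, H0=300.0):
    """Critical field of the tin gate, oersteds, via the parabolic law
    Hc(T) = H0 * (1 - (T/Tc)^2). H0 is an assumed round number."""
    return H0 * (1.0 - (T / Tc) ** 2) if T < Tc else 0.0

def gate_is_resistive(control_current_a, T=3.5, control_width_cm=0.025):
    """True when the control current drives the gate normal. The field of
    a strip over a ground plane is roughly H = 0.4*pi*I/w oersteds."""
    H = 0.4 * math.pi * control_current_a / control_width_cm
    return H > critical_field(T)
```

Operating just below Tc, as the text describes, leaves only a small residual critical field to overcome, which is why a modest control current suffices to switch the gate while the lead control (Tc = 7.2°K) stays superconducting throughout.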
To improve cryotron operation, a ground plane, also of lead, is placed under all of the circuitry to act as a diamagnetic shield and improve the current-density uniformity across the width of various thin-film elements. Normally, line widths vary from 0.005 to ~0.020 in. and film thicknesses from 300 to 10,000Å, although new fabrication techniques make narrower lines feasible. In fabricating cryotrons it is important that the edges of the gate elements be geometrically sharp to avoid undesirable switching characteristics associated with a thinner edge region, Fig. 2. One technique which has been used extensively to form patterns consists of placing a physical mask containing the film pattern between the evaporation source and the substrate and depositing through the mask. Film strips formed in this manner possess a penumbra at the film edges due to shadowing of the evaporant under the mask. Several techniques have been proposed for minimizing effects due to this penumbra. One of the more promising metallurgical techniques
Jan 1, 1967
-
Reservoir Engineering-Laboratory Research - The Alcohol Slug Process for Increasing Oil Recovery By R. L. Slobod, C. Gatlin
This study defines the basic mechanism of the miscible displacement of oil and water from porous media by various water-driven alcohol slugs. Three distinct alcohol slug processes were studied. Considerable data concerning the quantity of alcohol required for oil recovery were also obtained. All data were obtained in a 1-in. diameter, 100-ft long, unconsolidated core. The porosity of this system was 35 per cent, and the permeability was approximately 4 darcies. Total core pore volume was 5,716 cc. All displacements were conducted at a constant injection rate of 5 to 6 cc/min, which corresponds to a frontal advance of 5 to 6 ft/hr. The first portion of this paper is concerned with the use of one alcohol—isopropyl—as the slug material. Isopropyl alcohol (IPA) is completely miscible with both oil and water; however, miscibility of the three-component system, oil-water-IPA, requires a relatively high concentration of IPA. Hence, the displacement is not of the miscible type unless the IPA concentration is maintained above some critical value. A slug of IPA equal to only 13.5 per cent of the pore volume was found to be sufficient to obtain complete recovery of residual naphtha. In later studies two distinct process variations were developed. The first of these utilized methyl alcohol (MA) and IPA as slug materials. It was shown that methyl alcohol may be substituted for IPA at the front and rear of the slug with no loss of oil recovery. A slug of 4 per cent MA-4 per cent IPA-4 per cent MA was sufficient for complete oil recovery. Because MA is considerably cheaper than IPA, this represents an important step toward economic application. A second process variation used normal butyl alcohol (nBA) and MA as the composite slug, the nBA segment being injected first. This technique requires the smallest total slug size (approximately 10 per cent) of all processes studied. The high cost of nBA, however, precludes commercial application.
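The slug sizes quoted above can be turned into absolute volumes with the stated 5,716-cc pore volume; the arithmetic below is a trivial worked example using only figures given in the abstract.

```python
# Worked arithmetic from the figures quoted above: absolute slug volumes
# implied by the stated 5,716-cc core pore volume for the three processes.

PORE_VOLUME_CC = 5716.0

ipa_slug_cc    = 0.135 * PORE_VOLUME_CC                  # 13.5 pct single IPA slug
ma_ipa_slug_cc = (0.04 + 0.04 + 0.04) * PORE_VOLUME_CC   # 4-4-4 pct MA-IPA-MA composite
nba_ma_slug_cc = 0.10 * PORE_VOLUME_CC                   # ~10 pct nBA-MA composite
```

Note that the cheaper MA-IPA-MA composite (12 per cent total) also uses less total alcohol than the single 13.5 per cent IPA slug, which is the economic point the authors make.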
It is possible that this basic process, subject to changes of alcohol type, may lead to a commercial process. INTRODUCTION Within the last 10 years, considerable study has been devoted to the general mechanics of miscible phase displacements in porous media. These studies have, in general, dealt with two-component displacement. All reported field trials to date have been modified gas drives of one type or another. These utilize either lean gas at high pressure, rich gas at lower pressure or an LPG slug. The irreducible water content of the reservoir remains immobile during these processes. Although these techniques hold great promise, certain drawbacks do exist. Poor areal sweep efficiency, inherent in any displacement having a highly unfavorable mobility ratio, is one disadvantage. Furthermore, the miscibility of all processes which use gas is dependent on pressure. These pressure limitations may prohibit application to shallow reservoirs. There are also many areas where large quantities of LPG and natural gas are not readily available. These factors emphasize the need for improved techniques. The alcohol slug process is also a miscible-phase displacement process. It differs from the previously mentioned two-component techniques, however, in that both the reservoir oil and water are displaced by the slug. This behavior is a consequence of the miscibility of certain oil-water-alcohol systems. In the simplest case, a relatively small volume (slug) of an alcohol (such as isopropyl) is injected into the system. Water is then used to drive the slug through the medium. Thus, three components—alcohol, oil and connate water—exist at the front side of the slug. Miscibility is obtained at some alcohol concentration, this being dependent on the solubility of the particular system. Miscibility is maintained within the slug until the alcohol concentration falls below that value required to maintain miscibility.
When miscibility is lost, the process reverts to a water flood. Certain inherent benefits of this technique are apparent. High pressures are not necessary to obtain miscibility in these liquid systems. Furthermore, water is a more desirable driving agent than gas because of the improved mobility ratio and the attendant improvement in areal sweep efficiency. The main disadvantage is, of course, the relatively high cost of alcohols. The use of an alcohol slug to recover reservoir oil is not a new idea. However, this process has received only limited study, presumably due to the lack of industrial interest caused by the seemingly prohibitive cost of alcohols. The main purpose of this study was to investigate the mechanism of the displacement of oil and water by various alcohol slugs and to develop process modifications aimed toward possible commercial application.
-
Metal Mining - Research on the Cutting Action of the Diamond Drill Bit By E. P. Pfleider, Rolland L. Blake
IT is generally believed that the amount of diamond drilling will increase appreciably in the next decade, as the search for minerals throughout the world becomes more difficult and intense. An attendant problem may be one of short diamond supply, resulting in higher bit and drilling cost. With this background, the U. S. Bureau of Mines1 and the School of Mines at the University of Minnesota2 have established comprehensive research programs in diamond drilling. One of the several aims is the design of a more efficient bit, which would lower diamond consumption and increase rate of advance, both essential in reducing drilling costs. The objective of the specific research problem3 discussed in this paper was an investigation of the cutting action of the diamonds set in a diamond drill bit, cutting action meaning the manner in which the diamonds cut or loosen the minerals in the rocks being drilled. In the literature on cutting action such descriptive terms are used as: grinding, wearing, cutting, breaking, shearing, scraping, melting, and chipping. These actions were seldom described or defined. Grodzinski describes the cutting action of a single diamond in the shaping of certain types of material as "breaking out chips of the material." Brittle materials break as small separate chips, and tough materials, because of heat generated, give a continuous chip. Deeby said about diamond drills: "When diamonds are forced into the formation and rotated, they either break the bond holding the rock particles together, or they cause conchoidal fracture of the rock itself. The former action occurs when drilling in sandstones, siltstones, shales, etc., and the latter action when drilling in chert, flint, or quartz." He said that diamonds cut on the "grinding principle" but he does not define or elaborate on this action. The cutting action of diamonds on glass was first investigated about 1816 by Dr. W. H. Wollaston, an English physicist.
The best glass-cutting diamonds have a natural or artificially rounded cutting edge. This edge first indents the glass and then slightly separates the particles, forming a shallow and nearly invisible fissure. Since none of the material is removed, this action is one of splitting rather than cutting. No other reports of research work on the cutting action of the diamond were found, and further work was considered justified and advisable. It is impractical, even if possible, to observe directly the cutting action of a diamond drill bit in rock; therefore it was necessary to devise an indirect method. It was believed that a study of the following three observations would lead to a better understanding of the cutting action: 1—the appearance of the minerals or rock surface in the bottom of the hole, 2—the size, shape, and other characteristics of the drill cuttings, and 3—the condition of the diamonds in the bit. The cutting action in a particular rock probably varies with bit pressure and speed. If the bit were slowly lifted off the rock, the effect of decreasing pressure might obliterate those bottom-hole characteristics that are specific at the test pressure. Likewise, if the drill were stopped with the bit still in contact with the bottom of the hole, then decreasing speed effects would tend to obliterate the characteristics at the set test conditions. Therefore, in order to preserve those cutting effects impressed on the rock at test conditions, it seemed necessary to lift the bit off the bottom of the hole almost instantaneously once drilling conditions, i.e., revolutions per minute, pressure, and water flow, became constant. In addition to observing the cuttings, the bit, and the bottom of the hole, it seemed desirable to collect some quantitative data for purposes of correlation with the observations and for a record of bit performance. Consequently such data as revolutions per minute, force applied, and rate of advance of the bit were recorded.
Six rock types, listed in Table I, were chosen for the tests. It was felt that these rocks had most of the variable characteristics of texture, bonding, and mineral hardness met in the common rocks generally being drilled. The sandstone was so poorly cemented as to be friable, even though most of the cement was silica. The limestone, though well cemented, was quite porous. Originally it was planned to conduct the test work with a full-scale drill unit, using EX bits, 7/8-in. core, 1 1/4-in. OD. The drill worked well, but was too cumbersome for rapid, accurate drilling of many short holes (1 1/2-in.) in varied rock types. A new
Jan 1, 1954
-
Drilling – Equipment, Methods and Materials - A Laboratory Study of Rock Breakage by Rotary Drill...By B. E. Eakin, R. T. Ellington
An apparatus and a procedure for determining the viscosity behavior of hydrocarbons at pressures up to 10,000 psia and temperatures between 77° and 400°F are described. The equipment is suitable for measuring viscosity of either the liquid or vapor phases or the fluid above the two-phase envelope for systems exhibiting retrograde phenomena, according to the phase state of the system within these ranges of temperature and pressure. Equations are developed for calculation of viscosity from the experimental measurements, and new data for the viscosities of ethane and propane at 77°F are reported. INTRODUCTION With the advent of higher pressures and temperatures in industrial processes and deep petroleum and natural gas reservoirs, demand has increased for accurate values of physical properties of hydrocarbons under these conditions. Proportionately, more frequent occurrence of natural gas and condensate-type fluids is encountered as fluid hydrocarbons are discovered at greater depths. This increases the importance, to the reservoir engineer, of being able to predict accurately the physical properties of light hydrocarbon systems in the dense-gas and light-liquid phase states. Reliable gas viscosity data are limited primarily to measurements made on pure components near ambient temperature and at low pressures. Few investigations have been reported for high pressures, and except for methane, data on light hydrocarbons are subject to question. This is demonstrated by the large discrepancy between sets of data on the same component reported by different investigators. For mixtures in the dense-gas and light-liquid regions and for fluids exhibiting retrograde behavior there are very few published experimental data. Viscosity data for methane have been reported by Bicher and Katz,1 Sage and Lacey,12 Comings et al.,3 Golubev,3 and Carr,3 with good agreement among the last three sets of data.
Comings, Golubev, and Carr utilized capillary tube instruments for which the theory of fluid flow is well established. The theory permits calculation of the viscosity directly from the experimental data and the dimensions of the instrument alone. Sage and Lacey, and Bicher and Katz, used rolling-ball viscometers. The theory of the rolling-ball viscometer has not been completely established, and these instruments presently require calibration with fluids of known viscosity behavior before viscosities of test fluids can be measured. To obtain accurate data it is necessary that the rolling-ball viscometers be calibrated with fluids of density and viscosity similar to the test fluids, a difficult selection for the gas phase. From the methane data and experimental tests on various natural gases, Carr developed a correlation for predicting the viscosity behavior of light natural gases.2,3,4 This correlation was based on data for a very limited composition range; its application to rich gases and condensate fluids is questionable. The object of this investigation is to develop an instrument which can be used to obtain viscosity data at reservoir temperatures and pressures for rich gases, condensate-type systems above the two-phase envelope, and light liquid mixtures. These data will be used in an effort to develop correlations to represent the viscosity behavior of these fluids. APPARATUS In a previous viscosity study Carr2 utilized a modified Rankine capillary viscometer configuration, Fig. 1. In this instrument the gas to be tested is forced through the capillary tube in laminar flow by motion of a mercury pellet in the fall tube, the measured displacement time being that required for the mercury slug to move between the brass timer rings. The viscometer is constructed of glass and mounted in a steel pressure vessel. The test gas pressure in the viscometer is balanced by an inert gas (usually nitrogen) in the vessel.
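For a capillary instrument of this kind, laminar-flow theory reduces the measurement to the Hagen-Poiseuille relation. The sketch below illustrates the basic calculation only; the numerical values are invented for illustration and are not taken from the paper, and a real viscometer such as the Rankine unit also applies end-effect and kinetic-energy corrections.

```python
import math

def capillary_viscosity(radius_m, length_m, dp_pa, volume_m3, time_s):
    """Hagen-Poiseuille: mu = pi * r^4 * dP * t / (8 * L * V).

    Assumes fully developed laminar flow of the test fluid and neglects
    the end and kinetic-energy corrections a real instrument requires.
    """
    return math.pi * radius_m**4 * dp_pa * time_s / (8.0 * length_m * volume_m3)

# Hypothetical run: 1-mm-radius capillary, 0.5 m long, 10 Pa differential,
# 1 cm^3 of fluid displaced by the mercury slug in the measured time.
mu = capillary_viscosity(1e-3, 0.5, 10.0, 1e-6, 127.3)
```

For these invented numbers the displacement time of about 127 s corresponds to a viscosity near 1 mPa·s (water-like); a gas, being roughly fifty times less viscous, would give a proportionally shorter time at the same differential.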
Excellent results have been obtained with instruments of this type, with Carr2 and Comings5 reporting reproducibilities of 99.5 to 99.3 per cent and an estimated absolute accuracy of 99 per cent. However, these instruments have limitations which have precluded their use for liquids. The need for maintaining a balance between the pressures of the test fluid and the inert gas in the viscometer vessel presents operating problems and requires charging the test fluid to the viscometer very slowly. The principal drawback to the Rankine unit is the behavior of the mercury slug which provides the pressure differential across the capillary. When even trace quantities of propane or heavier hydrocarbons are present in the test gas, the mercury tends to subdivide
-
Industrial Minerals - Measurement of Cement Kiln Shell Temperatures (Mining Engineering, Feb 1960, pg 164)By R. E. Boehler, N. C. Ludwig
At Buffington Station, Gary, Ind., Universal Atlas Cement operates fourteen 8 x 10 1/2 x 155-ft cement kilns in mill 6 and two 11 x 360-ft kilns in the Harbor plant. The No. 11 and 12 kilns in mill 6 are equipped with Manitowac recuperator sections. This report describes studies in measuring exterior shell temperatures on several of these kilns and the development of a traveling radiation pyrometer with certain novel features. Preliminary Work: At first various temperature-sensing devices were placed on the steel shell: 1) crayons with calibrated melting points, 2) colored paints with temperature-calibrated pigments, 3) aluminum paints with temperature-calibrated binders, and 4) metal-stem dial thermometers. The colored paints and aluminum paints failed to indicate the temperatures correctly. The crayons and thermometers did indicate fairly correct temperatures, but it proved impossible to apply enough of these on the shell to detect all the potential areas where hot spots might develop. Furthermore, considerable labor was required to apply these sensors and read the temperatures. Consequently no further work was done with these devices. Formation of Hot Spots: In the burning or clinkering zone of a cement kiln, the thickness of the protective coating and the thickness of the brick govern the amount of heat transmitted to the steel kiln shell. Usually the protective coating consists of 4 to 8 in. of fused cement clinker. The formation of a hot spot is usually caused by loss of coating; that is, localized areas of the coating become thin or fall away from the refractory. This is generally caused by excessive temperature in the burning zone over a fairly long period of time. It may also be caused by a sudden thermal change in the burning zone. Variations in raw feed composition and in feed rate require changes in the fuel and air rates, and when these are not appropriately altered, conditions may develop in the kiln that will result in loss of coating.
Luminescence on the kiln shell indicates that a hot spot has developed to a point that usually alters the refractory's thermal conductivity properties. When this thermal weakness occurs in the burning zone of the kiln, constant vigilance is required to protect it by maintaining proper coating. Even so, it has been the writers' experience that within a period of several days to about four weeks the hot spot usually recurs with greater severity. This necessitates shutting down the kiln and rebricking the affected area. One of the prerequisites of a good burnerman is the ability to maintain a protective coating despite the many variables in operation. When he knows that it is getting thin or that an area has dropped off, he reduces the firing rate and kiln speed and brings feed into the affected area in an effort to rebuild the coating. But when powdered fuel is burned, the atmosphere of the kiln may prevent the burnerman's observing the condition of the coating closely at all times without taking off the fire. It is not considered good practice to do this frequently, as it imposes a thermal shock on the coating and upsets operation of the kiln. To help the burnerman scan the shell of the kiln along the burning zone, therefore, a radiation pyrometer, connected to a potentiometric recorder, was mounted on a slowly moving steel cable. The theory of operation, construction details, and adaptability of the radiation pyrometer are covered in an excellent monograph and also in a textbook. Shell temperatures of the Atlas Cement kilns were measured with a Brown Instruments Div. low-intermediate-range Radiamatic unit, of range 200° to 1200°F, and a circular-chart Electronik potentiometric recorder, of range 500° to 1000°F. In Bulletin 59095M the supplier publishes standard calibration data (millivolts vs degrees Fahrenheit) for this radiation pyrometer. These data, however, apply only to flat surfaces having emissivities of unity.
Calibration of Radiation Pyrometer for Use on Curved Surfaces: When applied to surface temperature measurements, the radiation pyrometer reading depends on the nature of the surface, the material of which it is composed, and also to some extent on the temperature of the surroundings. Although the radiation pyrometer is designed to give a calibrated response under ideal (blackbody) conditions, when used commercially it must be calibrated empirically. The calibration procedure, given below, follows that described by Dike (Ref. 1, pp. 38-39). Calibration tests on plane and curved surfaces showed that the response of the radiation pyrometer was very
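As a point of contrast with the empirical calibration the authors describe, the idealized graybody correction for a total-radiation pyrometer can be sketched as follows. This is an illustration of why the blackbody calibration data cannot be used directly, not the procedure actually used in the paper; the emissivity value is invented.

```python
def true_temperature_f(apparent_f, emissivity):
    """Idealized graybody correction for a total-radiation pyrometer.

    The instrument reports the blackbody temperature that would produce
    the observed radiance: eps * T_true^4 = T_apparent^4 on an absolute
    scale, so T_true = T_apparent / eps**0.25. Degrees Fahrenheit are
    converted through the Rankine scale (deg R = deg F + 459.67).
    """
    apparent_r = apparent_f + 459.67
    return apparent_r / emissivity**0.25 - 459.67

# A shell reading of 600 F at an assumed shell emissivity of 0.85
corrected = true_temperature_f(600.0, 0.85)
```

Because real kiln shells are curved, oxidized, and radiate to warm surroundings, a one-parameter correction of this kind is rarely adequate, which is why an empirical calibration against plane and curved surfaces was made.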
Jan 1, 1961
-
Part X - The Influence of Additive Elements on the Activity Coefficient of Sulfur in Liquid Lead at 600°CBy A. H. Larson, L. G. Twidwell
The influence which Au, Ag, Sb, Bi, Sn, and Cu have, both individually and collectively, on the activity coefficient of sulfur in liquid lead at 600°C was studied by circulating a H2S-H2 gas mixture over a specific lead alloy until equilibrium was attained. Subsequently, the H2S concentration in the equilibrium gas mixture and the sulfur concentration in the condensed phase were determined. The elements gold, silver, and antimony (above 8 at. pct) increased the activity coefficient of sulfur. Bismuth had no apparent effect. Tin (above 3 at. pct) and copper decreased the coefficient. The influence of an individual element, i, on sulfur is best reported as the interaction parameter, ε_S^i, defined as the rate of change of ln γ_S with the atom fraction of i at infinite dilution. The values of these first-order interaction parameters were determined; for example, ε_S^Cu = -55.0. These interaction parameters are used to predict the activity coefficient of sulfur in six four-component alloys and one seven-component alloy. Comparisons are made with direct experimental determinations. INTERACTIONS in dilute solution have been studied by many investigators. Most of the experimental work has been confined to solute-solvent interactions in simple binary systems and solute-solute interactions in ternary systems. Dealy and Pehlke have summarized the available literature on activity coefficients at infinite dilution in nonferrous binary alloys and have calculated from published data the values for interaction parameters in dilute nonferrous alloys. Interaction parameters are a convenient means of summarizing the effect of one solute species on another in a given solvent. Only a few investigators have studied interactions of the nonmetallic element sulfur in a metallic solvent. They are as follows: Rosenqvist, sulfur in silver; Rosenqvist and Cox,4 sulfur in steel; Chipman, sulfur in alloy steels; Alcock and Richardson, sulfur in copper alloys; Cheng and Alcock, sulfur in iron, cobalt, and nickel; Cheng and Alcock, sulfur in lead and tin.
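In the first-order interaction-parameter formalism referred to above, the effects of several dilute solutes are additive in the logarithm of the activity coefficient. The sketch below shows the prediction step only; the parameter magnitudes and compositions are illustrative (their signs follow the abstract: positive for Au, Ag, and Sb, negative for Sn and Cu, near zero for Bi), and the binary reference value gamma_S^0 is an assumption.

```python
import math

def gamma_s(gamma_s0, eps, x):
    """First-order dilute-solution prediction:
    ln(gamma_S) = ln(gamma_S^0) + sum_i eps_S^i * X_i,
    where X_i are atom fractions of the additive elements
    and eps_S^i are the first-order interaction parameters.
    """
    return gamma_s0 * math.exp(sum(eps[el] * x[el] for el in x))

# Illustrative interaction parameters (signs per the abstract)
eps = {"Au": +8.0, "Ag": +5.0, "Sb": +3.0, "Bi": 0.0, "Sn": -4.0, "Cu": -55.0}

# Predicted activity coefficient of sulfur in a four-component
# Pb-Au-Ag-Cu alloy, relative to an assumed binary value gamma_S^0 = 1.0
x = {"Au": 0.02, "Ag": 0.01, "Cu": 0.001}
g = gamma_s(1.0, eps, x)
```

The additivity is the point of the formalism: once each ε_S^i is measured in a ternary alloy, the activity coefficient in a multicomponent alloy follows without new experiments, which is the prediction tested against the four- and seven-component measurements.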
The only reported work on the Pb-S system in the dilute-solution region is that of Cheng and Alcock. Their investigation involved a study of the solubility of sulfur in liquid lead over the temperature range 500° to 680°C. The results may be summarized by the following relationship:

S (dissolved in lead) + Pb(l) = PbS(s)

log (at. pct S) = -3388/T + 3.511

Experimentally, it was found that Henry's law was valid up to the solubility limit of sulfur in lead, i.e., at 600°C up to 0.43 pct. Their investigation did not include the study of sulfur in lead alloys. More accurate calculations could be made in smelting and refining systems if activity coefficients of solute species could be accurately predicted in complex solutions. One of the objectives of this study was to compare the experimental data with the values calculated from the equations derived from models for dilute solutions proposed by Wagner9 and Alcock and Richardson. A temperature of 600°C was chosen as the experimental temperature to attain reasonable reaction rates and to minimize volatilization of the condensed phase. EXPERIMENTAL Materials. The Pb, Au, Ag, Sb, Bi, Sn, and Cu used for preparation of the alloys were American Smelting and Refining Co. research-grade materials. All were 99.999+ pct purity except the antimony and tin, which were 99.99+ pct. The initial alloys prepared for this study consisted of twenty-one binary alloys, eleven ternary alloys, and one six-component alloy. The constituent elements were mixed for each desired alloy and were placed in a crucible machined from spectrographically pure graphite. The crucible was placed in a Vycor tube which was evacuated with a vacuum pump and gettered by titanium sponge at 800°C for 8 to 12 hr. After the gettering was completed, the chamber containing the titanium was sealed and removed. The remaining sample chamber was placed in a tube furnace at 800°C for 2 hr and quenched in cold water.
The final operation consisted of homogenization of the alloy for 1 to 2 weeks at a temperature just below the solidus for the individual system. The resulting master alloys were sectioned into small pieces and a random choice made for individual equilibrations. Cobalt sulfide (Co9S8) used to control the gas atmosphere in the circulation system was prepared by passing dried H2S for 24 hr over a Co-S mixture heated to 700°C in a tube furnace. This material was then mixed with cobalt metal to give a two-phase mixture which, when heated in hydrogen to a particular temperature, produced a desired H2S/H2 gas atmosphere in the circulation system. A Cu2S-Cu mixture also used in this study was prepared in a comparable manner. Apparatus for Equilibrium Measurements. The experimental technique of this study required apparatus
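The Cheng and Alcock solubility relationship quoted earlier can be evaluated directly. The short check below (the only assumption being that T is absolute temperature in kelvin) reproduces the 0.43 at. pct solubility limit cited for 600°C.

```python
def sulfur_solubility_at_pct(temp_c):
    """Cheng and Alcock: log10(at. pct S) = -3388/T + 3.511, T in kelvin."""
    t_k = temp_c + 273.15
    return 10.0 ** (-3388.0 / t_k + 3.511)

# Solubility limit of sulfur in liquid lead at the experimental temperature
s_600 = sulfur_solubility_at_pct(600.0)   # about 0.43 at. pct
```

The agreement with the quoted 0.43 pct figure confirms the relation is written on a log10 / kelvin basis, and the positive temperature coefficient is consistent with measurements spanning 500° to 680°C.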
Jan 1, 1967
-
Institute of Metals Division - 475°C (885°F) Embrittlement in Stainless SteelsBy A. J. Lena, M. F. Hawkes
Changes in hardness, tensile properties, microstructure, electrical resistance, and X-ray diffraction effects indicate that lattice strains are necessary for the embrittlement of ferritic stainless steels when heated for relatively short times at 475°C (885°F). It is suggested that 475°C (885°F) embrittlement is due to the accelerated formation of an intermediate stage in the formation of σ under the influence of these strains. FERRITIC stainless steels (low-carbon alloys of iron with more than 15 pct Cr) are subject to two forms of embrittlement when heated in the temperature range of 375° to 750°C. The embrittlement which occurs after long-time heating between 565° and 750°C is well understood; it is caused by the precipitation of the hard, brittle σ phase. Sigma is an intermetallic compound of approximately equi-atomic composition with an extended range of formation in Fe-Cr alloys. The maximum temperature at which this form of embrittlement can occur is dependent upon chromium content and is approximately 620°C for a 17 pct Cr steel and 730°C for a 27 pct Cr steel. The other form of embrittlement occurs after relatively short heating periods in the range of 375° to 565°C; in the higher chromium steels, hours may be sufficient as compared to months for σ embrittlement. This phenomenon is not at all well understood, and several controversial theories have been proposed. The rate and intensity of embrittlement increase with increasing chromium content, but the maximum rate occurs at 475°C regardless of chromium content. As a result, the phenomenon has been termed 475°C (885°F) embrittlement. The effect of 475°C embrittlement on the properties of ferritic stainless steels has been thoroughly reviewed by Heger.1 The embrittlement causes a pronounced decrease in room-temperature impact strength and ductility, a large increase in hardness and tensile strength, and a decrease in electrical resistivity and corrosion resistance.
Microstructural changes accompanying embrittlement are minor and difficult to interpret, with a general grain darkening, appearance of a lamellar precipitate, grain boundary widening, and precipitation along ferrite veins having been reported at various times. With the exception of reported line broadening, X-ray diffraction studies by conventional Debye analysis of solid samples have been of little value. By making use of electron diffraction methods, Fisher, Dulis, and Carroll have recently shown the existence of a chromium-rich, body-centered cubic phase in 27 pct Cr steels which had been aged at 482°C (900°F) for as long as four years. Two types of theories have been advanced to account for the embrittlement. The first of these requires the precipitation of a phase not inherent in the Fe-Cr system, with various investigators suggesting a carbide,3 nitride,3 phosphide,4 or oxide. Theories of this type have difficulty accounting for the influence of alloying elements on the embrittlement and for the facts that a minimum chromium content is necessary for embrittlement and the intensity of embrittlement increases with increasing chromium content. The second type of theory that has been proposed relates 475°C embrittlement to σ phase formation, which is inherent in the Fe-Cr system. An assumption of this kind can adequately explain the influence of alloying elements, for they exert an effect on 475°C embrittlement similar to that on σ phase formation, as can be seen in Table I. The minimum chromium content is essentially the same for both phenomena, and it has been shown12,13 that σ is a stable phase in the embrittling temperature range. In addition, it has been reported14,15 that pure alloys embrittle to the same extent as commercial-type alloys. There are, however, several factors which have prevented complete acceptance of a σ phase theory.
Foremost of these is that the embrittlement can be removed by reheating for short time periods above 600°C, which in the higher chromium steels is within the stable σ region. No σ has ever been observed after one of these curing treatments, nor has any σ been found as a result of embrittlement at 475°C. In addition, the simple precipitation of σ cannot explain the time-temperature relationships for reactions between 350° and 750°C. This behavior is shown schematically in Fig. 1. Newell16 and Riedrich and Loib4 have shown that 475°C embrittlement follows a C-type curve as illustrated, while Short-
Jan 1, 1955
-
Institute of Metals Division - The Tensile Fracture of Ductile MetalsBy H. C. Rogers
A phenomenological study of the failure of polycrystalline ductile metals at room temperature was carried out using light and electron microscopy. Tensile fractures as well as sections of partially fractured bars, of OFHC copper in particular, were examined. The initiation and growth of the central crack in the neck of a tensile specimen occur by void formation. After the formation of the central crack, the fracture may be completed in either of two ways: by further void formation or by an "alternating slip" mechanism. The first leads to a "cup-cone" failure; the second, to a "double-cup" failure. In the past decade or decade and a half there has been a great deal of emphasis on the solution of the problem of the brittle fracture of metals, particularly those which normally exhibit considerable ductility, such as steel. Since the problem of the fracture of metals after large plastic strains has less immediate commercial or defense significance, considerably less effort has been expended in describing the details of the phenomenology and determining the mechanism of this type of fracture. The present research was undertaken to increase our knowledge in this area. The problem of ductile fracture has not been neglected completely, however. Ludwik1 first found by sectioning a necked but unbroken tensile specimen of aluminum that fracture began with a large internal crack which appeared to have started in the center of the neck. Examination of the fracture indicated that the crack had propagated radially with increasing deformation until a point was reached at which the path of the fracture suddenly left this transverse plane and proceeded at approximately 45 deg to the stress axis until the surface was reached. This gives rise to the commonly observed cup-cone tensile fracture.
When MacGregor2 was attempting to demonstrate the linearity of the true stress-true strain curve from necking until fracture, he found that copper was anomalous in that the stress dropped off markedly from the straight-line value before fracture occurred. Radiography indicated that in the copper an internal crack was formed long before the final fracture, the stress decreasing during the growth of this crack. One of the most significant advances in the understanding of ductile fracture was the result of work by Parker, Flanigan, and Davis.3 By the use of etch-pit orientations they were able to demonstrate conclusively that the fracture surface at the bottom of the cup, although on a gross scale normal to the tensile axis, did not consist of cleavage facets as had been previously supposed by many investigators. Recently, Forscher4 has shown evidence of porosity near the tensile fracture of hydrogenated zirconium which he attributes to hydride decomposition. The workers at the Titanium Metallurgical Laboratory5 have also shown evidence of porosity in a number of the commonly used metals after heavy deformation. Many metals have relatively low ductility during creep tests at high temperature. The fractures are intercrystalline, resulting from the nucleation and growth of grain boundary voids. The work in this area has been recently reviewed by Davies and Dennison.6 It is possible that some of the observations and conclusions may have a bearing on the present study, especially since at least two studies7 have been extended down to room temperature and below using magnesium alloys. However, since magnesium does exhibit low-temperature cleavage, these results may not be pertinent to the present one.
The use of the electron microscope as an aid to the study of fractures has been extensively exploited by Crussard and coworkers.9 The examination of direct carbon replicas of the fractures of a large number of metals and alloys showed that the bulk of the fracture surface was covered with cup-like indentations of the order of 1 to 2 µ in size. These frequently had a directionality by which Crussard claims to be able to tell the direction of the crack propagation. With this rather disconnected background of information, this investigation was undertaken in the hope of presenting a unified picture of the initiation and propagation of a fracture in a ductile metal. To this end all of the techniques previously used were employed simultaneously so that there might be a good correlation of the data obtained by different techniques. EXPERIMENTAL PROCEDURE The metal which was chosen as the starting material for this investigation was OFHC copper. Of the dozen or so materials considered, it best fulfilled the requirements of commercial availability in large sizes, good ductility, relatively high melting point compared with room temperature and
Jan 1, 1961
-
Reservoir Engineering – General - Reservoir Analysis for Pressure Maintenance Operations Based on Complete Segregation of Mobile FluidsBy John C. Martin
The discovery of a new gas reservoir demands that the planning of a sound well-spacing program be initiated early in the development stage. It is the purpose of this discussion to illustrate by actual field examples the application of basic well-spacing principles, previously developed for oil reservoirs, to the problem of well spacing in natural gas fields. These studies are presented for the field use of geologists and engineers who are concerned with the initial planning of the proper development of the newly discovered gas reservoir. INTRODUCTION The phenomenal growth of a vigorous natural gas industry emphasizes the increasing importance of natural gas as a source of energy, fuel, and raw materials to our nation's economy. Since 1945 marketed production of natural gas for the U. S. has increased 2 1/2 times to a record high of 10.6 trillion cu ft during 1957. As major participants in the gas industry, we share an added interest in developing and producing our natural gas reserves with constantly improved efficiency. The subject of well spacing is vitally important to the gas industry, for the well itself plays a significant role in the development of the natural gas reservoir and in control of the recovery process. Maximum utilization of wells is an integral part of sound conservation practices. The discovery of a new gas reservoir demands that careful choice of well location and well spacing be made and that the planning of a sound well-spacing program be initiated early in the development stage. With the drilling of the first development well, efforts of the geologist and engineer must be directed toward acquisition of adequate technical evidence upon which a firm recommendation for a spacing program may be based. With this technical appraisal as a foundation, operators and state regulatory agencies jointly can go far in providing a framework for sound development of gas fields to achieve a program of conservation that avoids the unnecessary well.
WELL-SPACING CONCEPTS Through laboratory and field investigations of the mechanism of the recovery of oil and gas, of fluid behavior, and of effective control of reservoir and well, a crystallization of ideas regarding reservoir behavior has emerged as a well-developed technology. Associated with a better understanding of the fundamental principles underlying reservoir and well behavior has been the growth of concepts concerning the role of wells and their spacing in the development and operation of an oil or gas reservoir. In addition to serving as outlets for the withdrawal of fluids from the reservoir, wells are recognized as having two other important functions: (1) providing access to the reservoir to obtain information concerning the characteristics of the reservoir and its fluids, and (2) serving as a means by which the natural or induced recovery mechanism may be effectively controlled. Beyond the minimum number of wells required to fulfill these two functions, additional wells will not increase recovery. With particular emphasis upon well spacing in oil reservoirs, many studies of the well spacing-recovery relationship have evolved the concept that ultimate oil recovery is essentially independent of the well spacing. These fundamental concepts are no different for the natural gas reservoir; they are equally applicable to the consideration of well spacing in gas reservoirs. For the gas reservoir, the problem of well spacing then revolves around the question of drainage and the degree or extent to which a well may drain gas from its surrounding reservoir environment. Theoretical and Experimental Work During the past 30 years, theoretical and experimental work carried on to study the physical principles involved in the flow of fluids through porous media has shed light upon the matter of drainage.
Fundamental mathematical equations have been derived to describe the mechanism of flow of oil and gas through porous rocks. With the recent advent of high-speed digital computers, attempts have been made with mounting success to develop solutions, employing numerical techniques, to mathematical expressions that describe more rigorously the physical behavior and mechanism involved in the unsteady-state flow of compressible fluids, such as a gas, through porous rock. In 1953, Bruce, Peaceman, Rachford, and Rice published a stable numerical procedure for solving the equation for production of gas at constant rate. The results of these calculations are significant with respect to this matter of drainage, for they indicated (1) that depletion of the gas reservoir resulted in a drop in pressure at the extremity of the
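The kind of constant-rate depletion calculation referred to can be sketched with a simple explicit finite-difference scheme. This is an illustrative toy, not the implicit Bruce-Peaceman-Rachford-Rice procedure; all parameter values are invented and in arbitrary consistent units, with the diffusivity scaled to one.

```python
import numpy as np

def deplete_radial_gas(nr=50, rw=1.0, re=10.0, p_init=2000.0,
                       q=1.0e5, t_end=50.0, dt=1.0e-3):
    """Closed radial gas reservoir produced at constant rate at the well.

    Uses the p^2 (ideal-gas) formulation, m = p^2, on a grid uniform in
    x = ln(r), where the radial diffusion equation becomes
    dm/dt = (1/r^2) d2m/dx2. Inner boundary: constant withdrawal flux,
    r dm/dr = q at the wellbore. Outer boundary: sealed (no flow).
    """
    x = np.linspace(np.log(rw), np.log(re), nr)
    dx = x[1] - x[0]
    r2 = np.exp(2.0 * x)                     # r^2 at each node
    m = np.full(nr, p_init**2)               # m = p^2, initially uniform
    for _ in range(int(t_end / dt)):
        m[0] = m[1] - q * dx                 # constant rate at the well
        m[-1] = m[-2]                        # sealed outer boundary
        lap = (m[2:] - 2.0 * m[1:-1] + m[:-2]) / dx**2
        m[1:-1] += dt * lap / r2[1:-1]       # explicit update of interior
    return np.sqrt(m)                        # pressure, well to boundary

p = deplete_radial_gas()
```

Run long enough for the transient to cross the drainage radius, the profile shows exactly the behavior cited: the pressure declines everywhere, including at the sealed extremity, with the largest drawdown at the wellbore. The explicit step is kept below the stability limit dt < dx² r² / 2 at the innermost interior node.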
-
Instrumentation For Mine Safety: Fire And Smoke Problems And SolutionsBy Ralph B. Stevens
INTRODUCTION Underground fires continue to be one of the most serious hazards to life and property in the mining industry. Although underground mines are analogous to high-rise buildings, where persons are isolated from immediate escape or rescue, application of technology to locate and control fire hazards while still in their controllable state has been slow to be implemented in underground mines. Even in large surface structures such as hotels, often only fire-protection systems which meet minimal laws are implemented, owing to the high cost of adding extensive extinguishing systems, isolation barriers, alternate ventilation, escape routes, and alarm systems. Incomplete and ineffective protection occasionally is evidenced where costs would not seem to be a factor, such as the $211 million MGM Grand Hotel fire of November 21, 1980.1 Paramount in increasing fire safety and decreasing the threat of serious fire is early warning followed by proper decision analysis to perform the correct action. However, very complex fire situations can be produced in structures such as high-rise buildings and underground mines simply because of the distances between the numerous fire-potential locations and fire-safe areas. Other complexities arise when normal activities occur that emit products of combustion, signaling a fire condition to a sensitive fire/smoke sensor. For example, the operation of diesel equipment or the performance of regular blasting can produce combustion products that reach the sensitive alarm points of many sensors.2 Smoke detectors for surface installations provide fire warning when occupants are at a distant location or when sleeping, thus greatly reducing injuries and property damage. However, when installed in the harsh environments of underground mines, fire and smoke detection equipment soon becomes inoperative, unreliable, or requires excessive maintenance. The U.S.
Bureau of Mines has performed many studies and tests to improve fire and smoke protection for underground mine workers.3 This paper describes several USBM safety programs which included in-mine testing with mine fire and smoke sensors, telemetry, and instrumentation to develop recommendations for improving mine fire safety. It is hoped that the technology developed during these programs can be added to other programs to provide the mining industry with the necessary fire-safety facts. By recognizing fire potentials and being provided with cost-effective, proven components that will perform reliably under the poor environmental conditions of mining, mine operators can provide protection for their working life and property equal to that which they provide for themselves and their families at home. The basis of this report is two USBM programs for fire protection in metal and nonmetal mines4,5 and one coal program.6 The data were collected beginning in May 1974 and continuing through the present, with underground tests of a South African fire system installed at the Magma Mine in Superior, Arizona, and a computer-assisted, experimental system at the Peabody Coal Mine in Pawnee, Illinois. The conduct of each program was as follows:
• Define the problem and its magnitude in the industry
• Develop concepts to solve or diminish the problem
• Review available hardware or systems approaches to fit the concepts
• Install and demonstrate the performance of a prototype system through fire tests in an operating mine.
MINE FIRE FACTS Whether in coal or metal and nonmetal mines, the potential severity of a fire hazard is directly related to location. As shown in Figure 1, fire in intake air at zones A, B, C, or D can cause contaminated air to spread throughout the mine quickly if not detected, isolated, or rerouted. Causes and locations of former metal and nonmetal fires are represented in Table 1; the causes and locations of fatalities and injuries are shown in Table 2.
Coal-related fires and their impact on deaths and injuries are graphed in Figure 2; their locations are described in Table 3.7 Significantly, the table shows that the hazard to personnel was three times greater for fires occurring in shaft or slope areas, and that the percentage of deaths and injuries was four times that of other areas. Number of Persons Affected A 129-mine sample indicated that from 8 to 479 employees per shift work in underground metal and nonmetal mines, and that deeper mines have larger populations, as shown in Figure 3. Coal mining shows similar employment, and a 16-state sample of 670 mines employing at least 25 persons shows the distribution in Figure 4. Drift mines accounted for 58 percent of the sample but employ only 45 percent of the underground workers.
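The nuisance-alarm problem described above (diesel exhaust or blasting fumes briefly tripping a sensitive smoke/CO sensor) can be sketched with a simple persistence check. This is an illustrative discrimination scheme, not a USBM algorithm; the threshold and persistence values are hypothetical.

```python
# Illustrative sketch: alarm only on a sustained rise above ambient,
# so that a short-lived spike (e.g., a blasting round or a passing
# diesel unit) does not trip the alarm. Thresholds are hypothetical.

def should_alarm(readings_ppm, threshold_ppm=10.0, persistence=3):
    """Alarm only if the last `persistence` consecutive readings all
    exceed `threshold_ppm` above ambient; a brief spike is ignored."""
    if len(readings_ppm) < persistence:
        return False
    return all(r > threshold_ppm for r in readings_ppm[-persistence:])

# A one-sample blasting spike is ignored; a sustained rise alarms.
spike = [2.0, 3.0, 25.0, 4.0, 3.0]
fire = [2.0, 6.0, 15.0, 18.0, 22.0]
```

A real installation would also compensate for sensor drift and ventilation changes, but the persistence idea is the core of rejecting transient combustion products.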
Jan 1, 1982
-
Producing - Equipment, Methods and Materials - Short-Term Well Testing to Determine Wellbore Damage - By L. R. Raymond, J. L. Hudson
This paper proposes a comparatively short-term (8 to 10 hours) well test for detecting and characterizing wellbore damage and for measuring mean formation permeability. The proposed test is made by injecting fluid at constant pressure and recording injection rate as a function of injection time. After one to four hours of injection, the well is shut in, and the fall-off of bottom-hole pressure is obtained as a function of shut-in time. Formation permeability is estimated by an iterative technique. First, a value of formation permeability is assumed. Then, a plot of the recorded injection rate as a function of dimensionless time is made, using the assumed permeability value. From the slope of the injection-rate curve, a new value of formation permeability is calculated. If the new value agrees with the original assumed value, the assumed value is the correct formation permeability. If the values do not agree, the process is repeated using the new permeability value in the calculation. Convergence is rapid, and a reliable permeability value results. Pressure fall-off data are used to check the result. Graphs of pressure and injection rate as functions of time given in the paper show that changes in permeability of the formation in the neighborhood of the wellbore are disclosed by this technique. Thus, the short-term test can be used to detect formation damage. Also, a rough measure of the radial extent of damage can be inferred, which is helpful in designing stimulation treatments. The mathematical model used for this work was a single-zone, horizontal reservoir with a damaged zone in which permeability decreased continuously as radial distance to the wellbore decreased. This model is more realistic than the usual two-zone, discontinuous-permeability model used in published works; calculations indicate the realistic model is valid.
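The iterative permeability estimate described above is a fixed-point loop: assume k, replot the rate data in dimensionless time, and recompute k from the slope until successive values agree. The skeleton can be sketched as follows; `k_from_injection_plot` stands in for the paper's plot-and-slope step, and the toy stand-in used here is invented purely so the loop can be demonstrated.

```python
# Sketch of the paper's fixed-point iteration for formation
# permeability. The real `k_from_injection_plot` would plot q vs
# dimensionless time using the assumed k and convert the slope back
# to a permeability; the toy version below is a stand-in with a
# known fixed point so the convergence logic can be exercised.

def iterate_permeability(k_initial, k_from_injection_plot,
                         tol=1e-6, max_iter=50):
    """Repeat: assume k -> recompute k from the rate plot -> compare.
    Stop when successive estimates agree to within relative `tol`."""
    k = k_initial
    for _ in range(max_iter):
        k_new = k_from_injection_plot(k)
        if abs(k_new - k) < tol * max(abs(k), 1.0):
            return k_new
        k = k_new
    raise RuntimeError("permeability iteration did not converge")

# Toy stand-in: a contraction mapping with fixed point at k = 100 mD.
toy = lambda k: 0.5 * k + 50.0
```

The paper reports that convergence is rapid; for any contraction-like update, as here, the loop closes in a handful of iterations.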
Vertical variations in horizontal permeability were studied with this model, and results indicate that the permeability measured by the short-term test is the mean horizontal permeability for the vertical interval tested. The proposed short-term test thus should be useful in detecting and characterizing formation damage and in measuring the formation permeability needed in calculating reservoir transmissibility. INTRODUCTION To plan the most efficient production or injection schedule for a well and to design or evaluate the optimal stimulation treatment, it is necessary to know the properties of the reservoir adjacent to the well, particularly the reservoir transmissibility and the characteristics of a damaged zone, if one exists. Several techniques for determining reservoir transmissibility from well tests have been presented in the literature.1,2,3,4 All these techniques rely on conducting constant-rate well tests that often are difficult to execute. A constant-pressure well test is generally easier to carry out, and this paper contains the first available method for the analysis of constant-pressure well tests. Determination of wellbore damage from transient well tests has been the subject of several papers. From these studies it is apparent that the information necessary for determination of the characteristics of a damaged zone is available shortly after the transient well test is initiated. Consequently, it may not be necessary to carry out an extensive well test (for example, a pressure build-up test) if the primary purpose of the test is to detect the existence of wellbore damage. All previous studies of well testing to determine wellbore damage have been based on the two-zone permeability model. In this model the damaged zone has a permeability ks extending to a radius rs, and the formation permeability k obtains from rs to the drainage radius re. Consequently, there is a discontinuity in permeability at r = rs.
This discontinuity can be eliminated by assuming a continuous variation in permeability through the damaged zone. The effect of this assumption on transient well tests is discussed in following sections of this paper. In addition, all formations have within them vertical permeability variations associated with lithology changes throughout the zone of interest. This paper also considers the effect of these variations on transient well tests. ANALYSIS OF CONSTANT-PRESSURE WELL TESTS The mathematical analysis associated with the injection of fluid at constant wellbore pressure into a single-zone, horizontal reservoir completely filled with a fluid of small and constant compressibility and constant viscosity is given in Appendix A. In this analysis it is assumed that the well is located at the center of an undamaged, circular drainage area. From this analysis, the formation permeability can be obtained as follows. 1. Estimate a value for the formation permeability k. 2. Prepare a plot of injection rate q vs
Jan 1, 1967
-
Institute of Metals Division - The Effect of Ferrite on the Mechanical Properties of a Precipitation-Hardening Stainless Steel - By Vito J. Colangelo
The primary object of this study was to determine the effect of ferrite and its orientation upon the mechanical properties of a precipitation-hardening stainless steel, with particular attention to the short-transverse properties. The investigation consisted of four major parts: the preliminary investigation of billet properties, the effect of forging reduction and ferrite content upon mechanical properties, the effect of notch orientation upon impact strength, and the relationship of heat composition to ferrite content. Low ductility and impact strength in the short-transverse direction were found to be associated with the orientation and shape of the ferrite plates. It was also determined that impact strength varied with notch orientation. The test values obtained with the notch perpendicular to the plane of the ferrite plate were lower than those obtained in the notch-parallel condition. The over-all investigation showed that high ferrite contents in general had a deleterious effect upon mechanical properties and that the ferrite content could be minimized by exercising rigorous control of the heat composition. A careful balance of elements, nitrogen in particular, must be maintained in order to reduce the formation of ferrite. THE precipitation-hardening stainless steels were developed to fulfill a need for high-strength corrosion-resistant alloys. In the annealed condition they are soft and ductile and possess many of the desirable characteristics of the austenitic stainless steels. In the hardened condition, the alloys exhibit the high strength and hardness of the martensitic stainless steels. The alloy under consideration in this investigation has a nominal composition (wt pct) as follows: C 0.13, Mn 0.95, Si 0.25, Cr 15.50, Ni 4.30, Mo 2.75, N 0.10. The hardening mechanism is identical to that of other hardenable steels in that it depends upon the transformation of austenite to martensite.
This alloy, because of its annealed structure and its ability to be hardened, combines the desirable forming and corrosion properties of the austenitic grades with the high hardness and strength levels attainable with the hardenable grades. The reason for this apparent duality of properties can be explained by considering a basic metallurgical difference between the hardenable stainless steels and those of the austenitic group. Both types are austenitic at 1800°F but, while the martensitic grades transform to martensite upon rapid cooling to room temperature, the austenitic grades remain austenitic even when cooled to temperatures below room temperature. The major difference then is in the degree of austenite stability. This stability can be described quantitatively by the Ms temperature. The Ms is defined as that temperature at which austenite begins to transform to martensite. The austenitic grades, for example, may be cooled to -300°F without producing significant quantities of martensite. The hardenable stainless steels, on the other hand, have an Ms temperature in the vicinity of 400° to 700°F. In cooling to room temperature, these alloys traverse the entire Ms-Mf range and show almost complete transformation to martensite. The semiaustenitic stainless steel, however, occupies an intermediate position with respect to its austenite stability. The analysis is so balanced that the Ms temperature lies at or slightly above room temperature. The resulting alloy retains much of its austenite at room temperature and yet responds to hardening heat treatments. Achieving this delicate balance of elements is therefore of great importance. Slight imbalances of the equivalent Cr-Ni ratios frequently result in the presence of δ ferrite. It is the effects of this ferrite with which we are concerned, more specifically the effect of the quantity and orientation of the ferrite upon mechanical properties, particularly ductility.
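The role of the Cr-Ni equivalent balance mentioned above can be illustrated with the classic Schaeffler/DeLong chromium and nickel equivalents. These are standard published formulas, not taken from this paper; they show why the author stresses rigorous nitrogen control, since nitrogen enters the nickel equivalent with a factor of 30, so a 0.01 pct shift in N moves Ni_eq by 0.3.

```python
# Standard Schaeffler/DeLong equivalents (published formulas, used
# here only as an illustration of composition balancing; the paper's
# own balance criteria are not stated in this excerpt).

def cr_equivalent(cr, mo, si, nb=0.0):
    # Higher Cr_eq promotes delta-ferrite formation.
    return cr + mo + 1.5 * si + 0.5 * nb

def ni_equivalent(ni, c, n, mn):
    # C and N are potent austenite stabilizers (factor of 30).
    return ni + 30.0 * (c + n) + 0.5 * mn

# Nominal composition from the paper (wt pct).
cr_eq = cr_equivalent(cr=15.50, mo=2.75, si=0.25)
ni_eq = ni_equivalent(ni=4.30, c=0.13, n=0.10, mn=0.95)
```

Plotting such a point on a constitution diagram indicates the expected ferrite tendency; a small drop in N shifts the point toward the ferrite-bearing field.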
PROCEDURE A) Preliminary Investigation of Billet and Forging Properties. In order to determine the effect of ferrite on billet properties, billet stock from three heats with various ferrite contents was utilized. Tensile specimens were obtained in the transverse and longitudinal directions from this material and heat-treated as shown in Tables I and II. Forgings were made from these same heats, the purpose being to determine what effect, if any, the ferrite might have upon the mechanical properties. These forgings were made in such a manner as to elongate the ferrite in the longitudinal and transverse directions. The method of forging was as follows. A section was cut from a 6-in.-sq billet of Heat A and flat-forged to 1-1/2 in. thick. Working was done from one direction only with no edging passes as shown
Jan 1, 1965
-
Coal - Face Ventilation in Development with Continuous Miners - By W. N. Poundstone
The mining and ventilating system used in development work in the Pittsburgh Seam in northern West Virginia is discussed. The seam conditions and the nature of the accompanying methane gas are described. The type of equipment and the mining cycle are discussed, showing how they are well suited for very gaseous development work. Face ventilation in development work is possibly the fastest growing problem of the industry. The coal mines of the future will be faced with the prospect of mining from under increasing depths of cover. Consequently, larger and larger amounts of methane gas probably will be found. The Pittsburgh Seam, in northern West Virginia, is an example of an area already faced with this problem. At the present time, most of the development work being done in this seam lies beneath 500 to 1200 ft of cover. The Pittsburgh Seam in this area has always been very gassy—even near the outcrop—and the recent development work has been accompanied by extremely large volumes of gas. In many cases, a single development section has liberated in excess of 1,000,000 cu ft in 24 hr. This problem of heavy gas liberation was the chief concern, several years ago, when continuous mining equipment was first considered at Christopher Coal Co. All of us were apprehensive about the liberation that would accompany rapid extraction in a single working place. However, the experience during the past few years has shown that this ability to mine only one place at a time is actually the key to working this type of coal. With all of the mining or advancement concentrated in one place, the ventilation can also be concentrated. By this it is meant that continuous mining permits the active working place to be ventilated with a maximum amount of fresh air, taken directly from the intake source without first passing another working place.
Continuous mining (and a good ventilating system) also permits a much greater concentration of attention or vigilance on the actual working place. There are two things that are very important to the mining of coal having high rates of liberation. First, adequate volumes of air are necessary. Second, and perhaps more important, a mining and ventilating system must be used that will provide an uninterrupted flow of air to every portion of the working section. With liberations of this magnitude, an interruption of only a few seconds can allow a dangerous accumulation of gas to occur. With adequate volumes of air available, the ability to concentrate ventilation more than offsets the concentration of gas emission that is inherent in continuous mining. The mining system used with continuous mining equipment at Humphrey Mine is similar to the system used at many mines in the area for development work. This system is designed to favor ventilation, recognizing that other efficiencies are meaningless if the equipment must stop because of ventilation difficulties. This plan is especially well suited to minimizing ventilation interruptions. Basically, the overall plan of mining is to develop headings into virgin coal and encircle or block out large areas. These blocks are generally at least 2000 ft sq. The purpose of this blocking out is to bleed gas from the area before pillaring. Experience has shown this method to be quite effective, even in the most gassy areas. The gas in this field seems to migrate or flow readily from the solid coal into the outside return headings of the development work. The numerous clay veins and slips that are found in the area are extremely good avenues for gas flow. A block of coal, surrounded by development headings, usually bleeds off readily; and since it is cut off from the virgin coal, it is not subject to gas migration, through the seam, from this source.
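The scale of "adequate volumes of air" for the liberation rates quoted above can be sketched with simple dilution arithmetic. This is an illustrative calculation, not from the paper: a section liberating 1,000,000 cu ft of methane in 24 hr emits roughly 694 cfm, and holding the mixture at or below a target of 1 pct methane then calls for airflow on the order of 70,000 cfm.

```python
# Illustrative dilution arithmetic (not taken from the paper).
# The 1 pct target is an assumed example concentration limit.

def methane_rate_cfm(cu_ft_per_day):
    """Convert a daily liberation volume to a per-minute rate."""
    return cu_ft_per_day / (24.0 * 60.0)

def required_airflow_cfm(gas_cfm, max_fraction=0.01):
    """Fresh-air quantity so that gas / (gas + air) <= max_fraction."""
    return gas_cfm * (1.0 - max_fraction) / max_fraction

gas = methane_rate_cfm(1_000_000)   # roughly 694 cfm of methane
air = required_airflow_cfm(gas)     # roughly 68,750 cfm of fresh air
```

The numbers show why even a brief interruption matters: at nearly 700 cfm of gas, a stopped line curtain lets an explosive accumulation form in minutes.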
However, the outside return of the encircling development work, adjacent to the virgin coal, may liberate gas for years. This liberation from the outside ribs in the virgin coal is the reason split ventilation is used in development work. If split ventilation were not used, there would, in many cases, be a serious build-up of gas in the intake before it could reach the working face. Fig. 1 shows a typical development section having seven headings. The two outside places on each side are returns, and the three center headings serve as intakes. This section is equipped with a ripper-type continuous mining machine. An off-track loading machine is used to load from a surge pile on the
Jan 1, 1961
-
Part XI – November 1968 - Papers - The Effect of Dispersed Hard Particles on the High-Strain Fatigue Behavior of Nickel at Room Temperature - By G. R. Leverant, C. P. Sullivan
To evaluate the effect of a dispersion of nondeformable, incoherent, second-phase particles on high-strain cyclic deformation and fracture, recrystallized TD-nickel (Ni-2ThO2) and a commercially pure nickel, Ni-200, were fatigued under strain control at total strain ranges varying from 0.009 to 0.036. Relative to the Ni-200, the slip at the surface of the TD-nickel was more wavy and discontinuous due to the presence of the thoria particles. This made crevice formation (incipient cracking) within slip bands more difficult in TD-nickel than in Ni-200. Both materials cyclically hardened to a constant (saturation) flow stress which increased with increasing plastic strain amplitude. Cellular substructures were developed in both materials during cycling. The cell size in TD-nickel was controlled by the thoria particle distribution and was independent of plastic strain amplitude over the range investigated. The cell size in Ni-200 was larger than that in TD-nickel at similar plastic strain amplitudes and was a function of plastic strain amplitude. These results, together with the cyclic stress-strain curves for both materials, are discussed in terms of a model for fatigue strain accommodation at saturation recently proposed by Feltner and Laird. NUMEROUS fatigue investigations have considered the interrelation of slip character, dislocation substructure, and cracking in pure metals and solid-solution alloys. However, except for the studies of the low-strain fatigue of internally oxidized copper alloys1 and cast, dispersion-strengthened lead,2 little is known about the effects which small, incoherent, nondeformable, second-phase particles have on cyclic deformation and cracking processes. Effects due to the particles alone are often obscured by a dislocation substructure introduced during thermomechanical processing of dispersion-strengthened metals.
In the present study, recrystallized TD-nickel and a commercially pure nickel, Ni-200, were employed to evaluate the effect of a thoria dispersion on high-strain fatigue deformation and cracking at room temperature. MATERIAL AND EXPERIMENTAL PROCEDURE The TD-nickel was supplied by DuPont as a 5/8-in.-thick stress-relieved plate which had been subjected to a proprietary schedule of thermomechanical treatments, and the Ni-200 as 3/4-in. bar which was subsequently annealed for 2 hr at 850°C in argon, resulting in an average grain diameter of 0.05 mm. The compositions of these materials are given in Table I. The microstructure of the TD-nickel consisted of elongated grains parallel to the primary working direction with an average width of 0.16 mm, Fig. 1(a). Many fine annealing twins were present, indicating that the starting material was in a recrystallized condition; this supposition was confirmed by the absence of any extensive dislocation substructure, Fig. 1(b). Sheetlike stringers parallel to the rolling direction were occasionally seen both within grains and at grain boundaries. Some approximately spherical particles about 2 μ in diam, which may correspond to exceptionally large thoria particle aggregates, were also present. The average Young's modulus of the plate material in the rolling direction was 21.8 × 10^6 psi, which is consistent with a {100}<001> recrystallization texture being prominent. In transmission microscopy, the 2.3 vol pct of thoria particles generally appeared to be uniformly distributed, although some clusters, 0.1 to 0.3 μ in diam, of larger particles were observed as previously reported for TD-nickel sheet,5 and stringering of particles was present in some areas as well.
The average diameter of the thoria particles was 450Å, with a calculated mean planar center-to-center spacing of 2100Å, as determined by quantitative metallographic analysis.6 The 0.2 pct offset yield stress was 36,000 psi, which agrees with the value predicted by the modified Orowan relation7 for edge dislocations bowing between thoria particles of the size and spacing observed in the present investigation. Fig. 2 illustrates the specimen design employed for the axial high-strain fatigue testing. Adapters were screwed onto the threaded portions of each specimen so that testing could be performed in the same manner as that reported for buttonhead specimens.8 Stressing was coincident with the working direction for both materials. The gage section of each specimen was electropolished and lightly etched prior to testing. The total strain was controlled, being varied between zero and a maximum tensile strain ranging from 0.009 to 0.036. In addition to these tests, a circumferentially notched TD-nickel specimen was cycled over a total strain range of 0.0075. The same strain
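The Orowan check cited above can be sketched as an order-of-magnitude estimate. This is the simple bowing stress G*b/L, not the paper's "modified Orowan relation" (which carries additional logarithmic and orientation factors); G and b for nickel are handbook values, and L is the surface-to-surface spacing implied by the reported particle size and spacing.

```python
# Order-of-magnitude Orowan estimate (a sketch, not the modified
# relation used in the paper). Handbook values for nickel are assumed.

G_PSI = 11.0e6     # shear modulus of nickel, approx., psi
BURGERS_A = 2.49   # Burgers vector of nickel, angstroms

def orowan_shear_stress_psi(spacing_a, diameter_a):
    """Simple Orowan bowing stress tau = G*b/L, where L is the free
    span between particle surfaces (center spacing minus diameter)."""
    free_span = spacing_a - diameter_a
    return G_PSI * BURGERS_A / free_span

# Reported values: 450 A particles at 2100 A center-to-center spacing.
tau = orowan_shear_stress_psi(spacing_a=2100.0, diameter_a=450.0)
```

The result is in the tens of ksi, the same order as the measured 36,000 psi yield stress; closer agreement requires the extra factors of the modified relation.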
Jan 1, 1969
-
Technical Notes - Effect of Feed Injection Position on Hydrocyclone Performance - By J. M. W. Mackenzie, C. J. Wood
In attempting to describe the size classification performance of a hydrocyclone, most workers have elected to use either an equilibrium orbit theory or a nonequilibrium orbit theory. The equilibrium orbit theory has been used by the majority of workers, including Lilge,1 Bradley,2 and Yoshioka and Hotta.3 In applying this theory, it is argued that particles in the body of a hydrocyclone attain an equilibrium radial position where the drag force on the particle resulting from the inward radial fluid velocity is balanced by the outward centrifugal force caused by the tangential component of fluid flow. When considered over the full height of the hydrocyclone, attainment of this radial equilibrium orbit results in the particle following a conical equilibrium envelope. It is then argued that if this envelope lies outside the envelope of "zero vertical velocity," the particle will report to the underflow, while if the equilibrium envelope lies inside the envelope of "zero vertical velocity," the particle will report to the overflow or vortex finder product. The d50-sized particle, which reports in equal quantities to the underflow and overflow, is assumed to correspond to particles whose equilibrium envelope is coincident with the envelope of "zero vertical velocity." In considering the equilibrium orbit theory, it is apparent that the horizontal position of the particles in the feed inlet pipe should have no effect on their ultimate destination in the hydrocyclone. Each particle should attain an equilibrium position which depends on the density, size, and shape of the particle; the density and viscosity of the fluid; and the flow patterns within the hydrocyclone. The nonequilibrium orbit or unsteady-state theory has been largely developed by Rietema4 and Mizrahi.6 Mizrahi has listed four main objections to the equilibrium orbit theory.
These objections center on the short residence time in the hydrocyclone, the fact that the experimental classification curve is much less sharp than is theoretically predicted, and the absence of negative efficiency conditions in hydrocyclones operating on a feed material which is much finer in size than d50. Proponents of the nonequilibrium orbit theory argue that for a particle to discharge with the underflow it must have sufficient outward radial velocity to reach the downward-flowing region close to the hydrocyclone wall in which the flow lines are parallel to the wall and the ratio of vertical to radial velocity is constant. It is then postulated that a d50 particle entering the cyclone at the center of the feed inlet will just reach this downward-flowing region as it reaches the apex. Thus, for uniform distribution of particles across the feed inlet, half the d50 particles—that is, those injected in the half of the inlet area nearest the cyclone wall—will report to the underflow, while those injected in the other half will not reach the downward-flowing region and will be carried inward to the center of the cyclone and thus report in the overflow. The exact thickness of the downward-flowing region of fluid adjacent to the outer wall of the hydrocyclone is uncertain, but Mizrahi considers it to be equal to the apex radius minus the air core radius. Particles larger than d50 have a greater outward centrifugal force acting on them than the d50 particles and may reach the wall even if injected at a distance from the wall greater than Di/2 (Di is the inlet diameter). Conversely, particles smaller than d50 may not reach the wall even if injected at a distance less than Di/2 from the cyclone wall. Since the equations put forward by the proponents of both theories yield approximately the same values of d50, it is not possible to decide between these theories by measurement of d50.
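The force balance underlying the equilibrium orbit theory can be sketched in its textbook Stokes-regime form (a generic illustration, not any one author's equation): a particle orbits where the inward Stokes drag 3*pi*mu*d*v_r equals the outward centrifugal force (pi/6)*d^3*(rho_s - rho_f)*v_t^2/r. Solving for d gives the particle size in equilibrium at radius r. The velocities used below are assumed, illustrative values.

```python
import math

# Equilibrium orbit force balance, Stokes drag vs centrifugal force:
#   3*pi*mu*d*v_r = (pi/6)*d^3*(rho_s - rho_f)*v_t^2 / r
# Solving for d:
#   d = sqrt(18*mu*v_r*r / ((rho_s - rho_f)*v_t^2))

def equilibrium_diameter_m(mu, v_r, r, rho_s, rho_f, v_t):
    """Particle diameter (m) whose drag and centrifugal forces balance
    at radius r. mu in Pa*s, velocities in m/s, densities in kg/m^3."""
    return math.sqrt(18.0 * mu * v_r * r / ((rho_s - rho_f) * v_t ** 2))

# Quartz in water at assumed, illustrative cyclone velocities.
d = equilibrium_diameter_m(mu=1.0e-3, v_r=0.1, r=0.05,
                           rho_s=2650.0, rho_f=1000.0, v_t=5.0)
```

With these assumed inputs d comes out in the tens of microns, a plausible d50 scale for a 6-in. cyclone, which is why d50 measurements alone cannot discriminate between the theories.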
It should be possible, however, to examine the theories by injecting a small stream of solids into the feed inlet of a hydrocyclone running on clear water. If the efficiency or classification curve is measured for various horizontal injection positions, then the curves should be coincident if the equilibrium orbit theory holds. If, however, the unsteady-state theory describes the cyclone operation, then the classification curves should show finer d50 sizes for particles injected close to the cyclone wall. Experimental A 6-in.-diam hydrocyclone with geometry as in Figs. 1 and 2 was used. Quartz particles were injected as a 50% by wt pulp via a 1/8-in. steel probe. For each in-
Jan 1, 1971
-
Institute of Metals Division - Hydrogen Embrittlement of Steels (Discussion page 1327a) - By W. M. Baldwin, J. T. Brown
The effect of hydrogen on the ductility, ε, of SAE 1020 steel at strain rates, ε̇, from 0.05 in. per in. per min to 19,000 in. per in. per min and at temperatures, T, from +150° to -320°F was determined. The ductility surface of the embrittled steel reveals two domains: one in which and the other in which The usual "explanations" of hydrogen embrittlement are in accord with the first of these domains only. The purpose of this investigation was a fuller characterization of the effects of varying temperature and strain rate on the fracture strain of hydrogen-charged steel. To be sure, it is known that low and high temperatures remove the embrittlement that hydrogen confers upon steels at room temperature,1 see Fig. 1a and b, and that high strain rates have a similar effect, see Fig. 2a, b, and c. However, the general effect of these two testing conditions on the fracture ductility of hydrogen-charged steels is not known, i.e., the three-dimensional graphical representation of fracture ductility as a function of temperature and strain rate is not known—only two traverses of the graph are available. The need for such a graph is not pedantic. To demonstrate this point, Fig. 3a, b, and c shows three of many three-dimensional graphs, all possible on the basis of the two traverses at hand. The important point (as will be developed in the Discussion) is that each of them would indicate a different basic mechanism for hydrogen embrittlement. It will be noted that the four types of ductility surfaces in Fig. 3a, b, and c may be characterized as follows: Material and Procedure Tensile tests were made at various temperatures and strain rates on a commercial grade of % in. round SAE 1020 steel in both a virgin state and as charged with hydrogen. The steel was spheroidized at 1250°F for 168 hr to give the unembrittled steel the lowest possible transition temperature.
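The authors' central point, that two traverses of a surface do not determine the whole surface, can be illustrated with a small numerical example. The functions below are invented purely for the demonstration: both candidate surfaces agree exactly along the two traverse lines x = 0 and y = 0 (standing in for the temperature and strain-rate traverses) yet differ off them.

```python
# Two invented surfaces that agree on both axis traverses but differ
# in the interior, mirroring the paper's argument that the two
# published traverses cannot fix the ductility surface.

def g(x): return 1.0 + x * x     # traverse along y = 0; g(0) = 1
def h(y): return 1.0 + y * y     # traverse along x = 0; h(0) = 1

def surface_additive(x, y):          # one surface fitting both traverses
    return g(x) + h(y) - 1.0

def surface_multiplicative(x, y):    # another, equally consistent one
    return g(x) * h(y)

# Equal on the traverses...
same_on_axes = (surface_additive(2.0, 0.0) == surface_multiplicative(2.0, 0.0)
                and surface_additive(0.0, 3.0) == surface_multiplicative(0.0, 3.0))
# ...but not in the interior:
differ_inside = surface_additive(2.0, 3.0) != surface_multiplicative(2.0, 3.0)
```

Just as here, each surface in the paper's Fig. 3 reproduces the known traverses while implying a different embrittlement mechanism, which is why the full T-ε̇ grid of tests was needed.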
The steel was charged cathodically with hydrogen as follows: The specimen was attached to a 6 in. steel wire, degreased for 5 min in trichlorethylene, rinsed with water, and fixed in a plastic top in the center of a cylindrical platinum mesh anode. The assembly was placed in a 1000 milliliter beaker containing an electrolyte of 900 milliliters of 4 pct sulphuric acid and 10 milliliters of poison (2 grams of yellow phosphorus dissolved in 40 milliliters of carbon disulphide). A current density of 1 amp per sq in. was used, which developed a 4 v drop across the two electrodes. All electrolysis was carried on at room temperature. Temperatures for tensile tests were obtained by immersing the specimens in baths of water (+70° to +150°F), mixtures of liquid nitrogen and isopentane (+70° to -240°F), and boiling nitrogen (-240° to -320°F). Specimens were tested in tension at strain rates of 0.05, 10, 100, 5000, and 19,000 in. per in. per min. The 0.05 and 10 in. per in. per min strain rates were obtained on a 10,000 lb Riehle tensile testing machine, the 100 in. per in. per min rate on a hydraulic-type draw bench with a special fixture, and the 5000 and 19,000 in. per in. per min rates on a drop hammer. The fracture ductility of hydrogen-charged steel at room temperature and normal testing strain rates (~0.05 in. per in. per min) is a function of electrolyzing time, dropping to a value that remains constant after a critical time.* Under the conditions of this research the saturated loss in ductility occurred at approximately 30 min, see Fig. 4, and a 60 min charging time was taken as standard for all subsequent tests. (*The hydrogen content of the steel continues to increase with charging time even after the ductility has leveled off to its saturated value.) After charging the steel with hydrogen, the surface was covered with blisters. These have been described by Seabrook, Grant, and Carney.
The original diameter of the specimen was not reduced by acid attack, even after 91 hr. Results The ductility of both uncharged and charged specimens is given as a function of strain rate in Fig. 5, and as a function of temperature at four different strain rates in Fig. 6. These results are assembled into a three-dimensional graph in Fig. 7. It is seen that the locus of the minima in the ductility curves of the charged steels divides the ductility surface into two domains. At temperatures below the minima,
Jan 1, 1955